tf.contrib.distribute.CollectiveAllReduceStrategy

Class CollectiveAllReduceStrategy

Inherits From: Strategy

Defined in tensorflow/contrib/distribute/python/collective_all_reduce_strategy.py.

Distribution strategy that uses collective ops for all-reduce.

It is similar to the MirroredStrategy but it uses collective ops for reduction.

When cluster_spec is given via the configure method, this strategy turns into its multi-worker version, which works across multiple workers with between-graph replication.
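
For example, a minimal sketch of the multi-worker setup. It assumes the v1 configure(session_config, cluster_spec, task_type, task_id) keyword signature and uses hypothetical worker addresses; each worker runs its own copy of this program (between-graph replication):

import tensorflow as tf

strategy = tf.contrib.distribute.CollectiveAllReduceStrategy(
    num_gpus_per_worker=1)

# Hypothetical two-worker cluster; this copy of the program is worker 0.
cluster_spec = {
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]
}
strategy.configure(cluster_spec=cluster_spec, task_type="worker", task_id=0)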

__init__

__init__(num_gpus_per_worker=0)

Initializes the object.

Args:

  • num_gpus_per_worker: number of local GPUs or GPUs per worker; defaults to 0, meaning CPU only.
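
For example, a minimal single-worker sketch (the GPU count here is only for illustration):

import tensorflow as tf

# Two replicas on this machine, one per local GPU; with the default of 0
# the strategy runs on the CPU only.
strategy = tf.contrib.distribute.CollectiveAllReduceStrategy(
    num_gpus_per_worker=2)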

Properties

extended

tf.distribute.StrategyExtended with additional methods.

num_replicas_in_sync

Returns number of replicas over which gradients are aggregated.
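
For example, a common pattern (sketched here, not prescribed by the API) is to scale the global batch size by this value:

per_replica_batch_size = 64  # assumed per-replica size, for illustration
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync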

Methods

tf.contrib.distribute.CollectiveAllReduceStrategy.__deepcopy__

__deepcopy__(memo)

tf.contrib.distribute.CollectiveAllReduceStrategy.experimental_finalize

experimental_finalize()

Any final actions to be done at the end of all computations.

In eager mode, it executes any finalize actions as a side effect. In graph mode, it creates the finalize ops and returns them.

For example, TPU shutdown ops.

Returns:

A list of ops to execute.

tf.contrib.distribute.CollectiveAllReduceStrategy.experimental_initialize

experimental_initialize()

Any initialization to be done before running any computations.

In eager mode, it executes any initialization as a side effect. In graph mode, it creates the initialization ops and returns them.

For example, TPU initialize_system ops.

Returns:

A list of ops to execute.
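
A graph-mode sketch covering both experimental_initialize and experimental_finalize, assuming the returned op lists are run in a tf.Session around the actual computation (in eager mode both calls act as side effects):

import tensorflow as tf

strategy = tf.contrib.distribute.CollectiveAllReduceStrategy()
init_ops = strategy.experimental_initialize()
# ... build the replicated computation here ...
finalize_ops = strategy.experimental_finalize()

with tf.Session() as sess:
  sess.run(init_ops)
  # ... run training steps ...
  sess.run(finalize_ops)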

tf.contrib.distribute.CollectiveAllReduceStrategy.experimental_run

experimental_run(
    fn,
    input_iterator=None
)

Runs ops in fn on each replica, with inputs from input_iterator.

When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica.

Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator.

fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group.

IMPORTANT: Depending on the DistributionStrategy being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).

Args:

  • fn: function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
  • input_iterator: (Optional) input iterator from which the inputs are taken.

Returns:

Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
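
A minimal graph-mode sketch; step_fn is a hypothetical per-replica function and the dataset is assumed to be batched by the global batch size:

import tensorflow as tf

strategy = tf.contrib.distribute.CollectiveAllReduceStrategy()
dataset = tf.data.Dataset.from_tensors([[1., 2.]]).repeat().batch(2)
iterator = strategy.make_dataset_iterator(dataset)

def step_fn(inputs):
  # Executed once per replica on that replica's share of the batch.
  return tf.reduce_sum(inputs)

per_replica_sums = strategy.experimental_run(step_fn, iterator)
# Combine the per-replica values into a single tensor (see reduce below).
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_sums)

with tf.Session() as sess:
  sess.run(iterator.initialize())
  print(sess.run(total))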

tf.contrib.distribute.CollectiveAllReduceStrategy.make_dataset_iterator

make_dataset_iterator(dataset)

Makes an iterator for input provided via dataset.

Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator, which provides more control and does not try to divide a batch across replicas.

The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.

Args:

  • dataset: tf.data.Dataset that will be distributed evenly across all replicas.

Returns:

A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator.
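
A short sketch of the global-batch assumption; features is a hypothetical in-memory array used only for illustration:

# Batch by the *global* batch size; the strategy splits each batch across
# the replicas.
global_batch_size = 32 * strategy.num_replicas_in_sync
dataset = tf.data.Dataset.from_tensor_slices(features).batch(global_batch_size)
iterator = strategy.make_dataset_iterator(dataset)
init_ops = iterator.initialize()  # run these before fetching inputs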

tf.contrib.distribute.CollectiveAllReduceStrategy.make_input_fn_iterator

make_input_fn_iterator(
    input_fn,
    replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)

Returns an iterator split across replicas created from an input function.

The input_fn should take a tf.distribute.InputContext object, where information about input sharding can be accessed:

def input_fn(input_context):
  # Shard the dataset so each input pipeline reads a distinct slice.
  d = tf.data.Dataset.from_tensors([[1.]]).repeat()
  return d.shard(input_context.num_input_pipelines,
                 input_context.input_pipeline_id)

def replica_fn(inputs):
  # Placeholder per-replica computation.
  return inputs * 2.

with strategy.scope():
  iterator = strategy.make_input_fn_iterator(input_fn)
  replica_results = strategy.extended.call_for_each_replica(
      replica_fn, iterator.get_next())

Args:

  • input_fn: function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
  • replication_mode: a tf.distribute.InputReplicationMode value; defaults to PER_WORKER.

Returns:

An iterator object that can be initialized and used to fetch the next element.

tf.contrib.distribute.CollectiveAllReduceStrategy.reduce

reduce(
    reduce_op,
    value
)

Reduces value across replicas.

Args:

  • reduce_op: A tf.distribute.ReduceOp value specifying how values should be combined.
  • value: A "per replica" value to be combined into a single tensor.

Returns:

A Tensor.
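
For example, a sketch averaging per-replica losses into a single tensor; per_replica_losses stands for the result of a per-replica computation such as experimental_run above:

mean_loss = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_losses)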

tf.contrib.distribute.CollectiveAllReduceStrategy.scope

scope()

Returns a context manager selecting this Strategy as current.

Inside a with strategy.scope(): code block, this thread will use a variable creator set by strategy, and will enter its "cross-replica context".

Returns:

A context manager.
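
A minimal sketch, assuming strategy is an instance of this class; variables created under the scope go through the strategy's variable creator:

with strategy.scope():
  # Created through the variable creator set by the strategy.
  v = tf.get_variable("v", initializer=1.0)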

tf.contrib.distribute.CollectiveAllReduceStrategy.update_config_proto

update_config_proto(config_proto)

Returns a copy of config_proto modified for use with this strategy.

The updated config contains settings the strategy needs, e.g. configuration for running collective ops, or device filters that improve distributed training performance.

Args:

  • config_proto: a tf.ConfigProto object.

Returns:

The updated copy of the config_proto.
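
A sketch of the intended use, assuming the updated proto is passed to the tf.Session that runs the training graph:

import tensorflow as tf

config = strategy.update_config_proto(tf.ConfigProto(allow_soft_placement=True))
with tf.Session(config=config) as sess:
  pass  # run the training graph with the strategy-adjusted configuration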