tf.contrib.tpu.device_assignment(
topology,
computation_shape=None,
computation_stride=None,
num_replicas=1
)
Defined in tensorflow/contrib/tpu/python/tpu/device_assignment.py.
Computes a device_assignment of a computation across a TPU topology.
Attempts to choose a compact grid of cores for locality.
Returns a DeviceAssignment that describes the cores in the topology assigned
to each core of each replica.
computation_shape and computation_stride values should be powers of 2 for
optimal packing.
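To build intuition for how computation_shape and computation_stride describe a replica's block of cores, here is an illustrative numpy sketch (not the library's actual implementation) that enumerates the topology coordinates one replica would occupy, assuming the block starts at a given origin and its cores are spaced computation_stride apart along each topology axis:

```python
import numpy as np

def replica_core_coords(origin, computation_shape, computation_stride):
    """Illustrative only: list the topology coordinates of the cores in one
    replica's block, assuming the block starts at `origin` and successive
    cores are spaced `computation_stride` apart along each topology axis."""
    origin = np.asarray(origin)
    shape = np.asarray(computation_shape)
    stride = np.asarray(computation_stride)
    coords = []
    for idx in np.ndindex(*shape):  # iterate over the block's logical cores
        coords.append(tuple(int(c) for c in origin + np.asarray(idx) * stride))
    return coords

# A 2x1x1 block of cores with unit stride, starting at the topology origin:
print(replica_core_coords([0, 0, 0], [2, 1, 1], [1, 1, 1]))
# -> [(0, 0, 0), (1, 0, 0)]

# A 1x1x2 block with stride 2 along the last axis (cores two apart):
print(replica_core_coords([0, 0, 0], [1, 1, 2], [1, 1, 2]))
# -> [(0, 0, 0), (0, 0, 2)]
```

The function name, the `origin` parameter, and the tiling rule are assumptions made for illustration; the real packing logic lives in device_assignment.py.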
Args:
  topology: A Topology object that describes the TPU cluster topology. To obtain a TPU topology, evaluate the Tensor returned by initialize_system using Session.run. Either a serialized TopologyProto or a Topology object may be passed. Note: you must evaluate the Tensor first; you cannot pass an unevaluated Tensor here.
  computation_shape: A rank 1 int32 numpy array with size equal to the topology rank, describing the shape of the computation's block of cores. If None, the computation_shape is [1] * topology_rank.
  computation_stride: A rank 1 int32 numpy array of size topology_rank, describing the inter-core spacing of the computation_shape cores in the TPU topology. If None, the computation_stride is [1] * topology_rank.
  num_replicas: The number of computation replicas to run. The replicas will be packed into the free spaces of the topology.
Returns:
A DeviceAssignment object, which describes the mapping between the logical cores in each computation replica and the physical cores in the TPU topology.
Raises:
  ValueError: If topology is not a valid Topology object.
  ValueError: If computation_shape or computation_stride are not 1D int32 numpy arrays with shape [3] where all values are positive.
  ValueError: If the computation's replicas cannot fit into the TPU topology.
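The last error arises when the requested replicas exceed the topology's capacity. A simplified sketch of such a fit check (the real implementation may pack replicas more tightly into free spaces; the function name and tiling rule here are assumptions for illustration):

```python
import numpy as np

def check_replicas_fit(topology_shape, computation_shape,
                       computation_stride, num_replicas):
    """Simplified sketch: assume each replica's block spans
    computation_shape * computation_stride cores along each axis, so the
    topology holds at most prod(topology_shape // block_span) blocks in a
    regular grid. Illustrates why device_assignment can raise ValueError."""
    topology_shape = np.asarray(topology_shape)
    block_span = np.asarray(computation_shape) * np.asarray(computation_stride)
    if np.any(topology_shape % block_span):
        raise ValueError("computation blocks do not tile the topology evenly")
    max_replicas = int(np.prod(topology_shape // block_span))
    if num_replicas > max_replicas:
        raise ValueError(
            f"cannot fit {num_replicas} replicas; at most {max_replicas} "
            f"blocks of span {block_span.tolist()} fit in topology "
            f"{topology_shape.tolist()}")
    return max_replicas

# A 2x2x2 topology holds four 1x1x2 replicas (each replica spans both
# cores along the last axis):
print(check_replicas_fit([2, 2, 2], [1, 1, 2], [1, 1, 1], 4))  # -> 4
```

Requesting a fifth replica in the example above would raise the ValueError, mirroring the behavior documented here.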