Mapping from logical cores in a computation to the physical TPU topology.
tf.tpu.experimental.DeviceAssignment(topology, core_assignment)
Prefer the DeviceAssignment.build() helper to construct a DeviceAssignment; it is easier, if less flexible, than constructing a DeviceAssignment directly.
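Since build() is the recommended entry point, here is a minimal sketch of its use. The mesh shape and device coordinates below describe a hypothetical two-core, single-host topology constructed by hand so the example runs without TPU hardware; on a real TPU the Topology would come from tf.tpu.experimental.initialize_tpu_system:

```python
import numpy as np
import tensorflow as tf

# Hand-built Topology for a hypothetical single-host accelerator with one
# chip holding two cores. Mesh axes are (x, y, z, core); device_coordinates
# has shape [num_tasks, num_devices_per_task, topology_rank].
topology = tf.tpu.experimental.Topology(
    mesh_shape=[1, 1, 1, 2],
    device_coordinates=np.array(
        [[[0, 0, 0, 0],    # task 0, device 0
          [0, 0, 0, 1]]],  # task 0, device 1
        dtype=np.int32))

# One core per replica: two single-core replicas.
da = tf.tpu.experimental.DeviceAssignment.build(topology, num_replicas=2)
print(da.num_replicas, da.num_cores_per_replica)  # 2 1
```

The helper chooses the physical placement for you; constructing a DeviceAssignment directly would require supplying the full core_assignment array yourself.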
Args:
  topology: A Topology object that describes the physical TPU topology.
  core_assignment: A logical to physical core mapping, represented as a rank 3 numpy array. See the description of the core_assignment property for more details.

Attributes:
  core_assignment: The logical to physical core mapping.
  num_cores_per_replica: The number of cores per replica.
  num_replicas: The number of replicas of the computation.
  topology: A Topology that describes the TPU topology.
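The rank 3 layout of core_assignment can be illustrated with a plain numpy array (a hand-made example, not output from a real TPU). Its shape is [num_replicas, num_cores_per_replica, topology_rank], and indexing by [replica, logical_core] mirrors the lookup that the coordinates() method performs:

```python
import numpy as np

# Illustrative core_assignment: 2 replicas, 1 core each, on a rank-4
# (x, y, z, core) topology. Each entry holds the physical coordinates
# of the core assigned to that (replica, logical_core) pair.
core_assignment = np.array(
    [[[0, 0, 0, 0]],   # replica 0 -> physical core at (x=0, y=0, z=0, core=0)
     [[0, 0, 0, 1]]],  # replica 1 -> physical core at (x=0, y=0, z=0, core=1)
    dtype=np.int32)

num_replicas, num_cores_per_replica, topology_rank = core_assignment.shape
print(num_replicas, num_cores_per_replica, topology_rank)  # 2 1 4

# Physical coordinates of replica 1's logical core 0.
print(core_assignment[1, 0].tolist())  # [0, 0, 0, 1]
```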
Raises:
  ValueError: If topology is not a Topology object.
  ValueError: If core_assignment is not a rank 3 numpy array.

Methods

build
@staticmethod
build(topology, computation_shape=None, computation_stride=None, num_replicas=1)
coordinates

coordinates(replica, logical_core)
Returns the physical topology coordinates of a logical core.
host_device

host_device(replica=0, logical_core=0, job=None)
Returns the CPU device attached to a logical core.
lookup_replicas

lookup_replicas(task_id, logical_core)
Lookup replica ids by task number and logical core.
Args:
  task_id: TensorFlow task number.
  logical_core: An integer, identifying a logical core.

Returns:
  A sorted list of the replicas that are attached to that task and logical_core.

Raises:
  ValueError: If no replica exists in the task which contains the logical core.

tpu_device
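What lookup_replicas computes can be sketched in plain Python (a hypothetical placement table stands in for the real Topology): for a given task and logical core, it collects the sorted ids of replicas whose physical core for that logical core lives on that task.

```python
# Hypothetical placement: (replica, logical_core) -> TensorFlow task number.
# Four single-core replicas split across two tasks.
task_of = {(0, 0): 0, (1, 0): 0,
           (2, 0): 1, (3, 0): 1}

def lookup_replicas_sketch(task_id, logical_core):
    """Sorted replica ids whose logical_core is placed on task_id."""
    return sorted(r for (r, c), t in task_of.items()
                  if t == task_id and c == logical_core)

print(lookup_replicas_sketch(0, 0))  # [0, 1]
print(lookup_replicas_sketch(1, 0))  # [2, 3]
```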
tpu_device(replica=0, logical_core=0, job=None)
Returns the name of the TPU device assigned to a logical core.
tpu_ordinal

tpu_ordinal(replica=0, logical_core=0)
Returns the ordinal of the TPU device assigned to a logical core.
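The string tpu_device returns follows TensorFlow's usual device-name conventions. The sketch below shows the assumed format (the real method derives the task number and TPU ordinal from the topology rather than taking them as parameters, and host_device returns the analogous CPU name):

```python
def tpu_device_name_sketch(task, ordinal, job=None):
    """Assumed TensorFlow TPU device-name format for a given task/ordinal."""
    if job is None:
        return "/task:%d/device:TPU:%d" % (task, ordinal)
    return "/job:%s/task:%d/device:TPU:%d" % (job, task, ordinal)

print(tpu_device_name_sketch(0, 1, job="worker"))  # /job:worker/task:0/device:TPU:1
print(tpu_device_name_sketch(0, 1))                # /task:0/device:TPU:1
```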