Mapping from logical cores in a computation to the physical TPU topology.
```
tf.tpu.experimental.DeviceAssignment(
    topology: tf.tpu.experimental.Topology,
    core_assignment: np.ndarray
)
```
Prefer the `DeviceAssignment.build()` helper to construct a `DeviceAssignment`; it is easier, if less flexible, than constructing a `DeviceAssignment` directly.
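To illustrate what the constructor expects, here is a hypothetical rank-3 `core_assignment` array. The shape and coordinate values below are illustrative assumptions, not taken from any real topology: the array is indexed as `[replica, logical_core]` and holds the physical topology coordinates of each logical core.

```python
import numpy as np

# Hypothetical rank-3 core assignment: 2 replicas, each with 1 logical core,
# on a topology whose coordinates have 4 axes. The entry at
# [replica, logical_core] is that core's physical coordinate vector.
core_assignment = np.array(
    [
        [[0, 0, 0, 0]],  # replica 0 -> coordinates (0, 0, 0, 0)
        [[0, 0, 0, 1]],  # replica 1 -> coordinates (0, 0, 0, 1)
    ],
    dtype=np.int32,
)

# The constructor requires a rank-3 array; passing anything else
# raises ValueError.
assert core_assignment.ndim == 3
num_replicas, num_cores_per_replica, topology_rank = core_assignment.shape
```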
Raises | |
---|---|
`ValueError` | If `topology` is not a `Topology` object. |
`ValueError` | If `core_assignment` is not a rank-3 numpy array. |
Methods
build
```
@classmethod
build(
    topology: tf.tpu.experimental.Topology,
    computation_shape: Optional[np.ndarray] = None,
    computation_stride: Optional[np.ndarray] = None,
    num_replicas: int = 1,
    device_order_mode: tf.tpu.experimental.DeviceOrderMode = DeviceOrderMode.AUTO
) -> 'DeviceAssignment'
```
coordinates
```
coordinates(
    replica: int, logical_core: int
) -> Tuple
```
Returns the physical topology coordinates of a logical core.
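A minimal sketch of this lookup, assuming the assignment is stored as a rank-3 array indexed by `[replica, logical_core]`; the array contents here are hypothetical:

```python
import numpy as np
from typing import Tuple

# Hypothetical assignment: 2 replicas x 1 logical core x 4 coordinate axes.
core_assignment = np.array(
    [[[0, 0, 0, 0]], [[0, 0, 0, 1]]], dtype=np.int32
)

def coordinates(replica: int, logical_core: int) -> Tuple:
    """Sketch of DeviceAssignment.coordinates: return the physical
    topology coordinates of one logical core as a tuple."""
    return tuple(core_assignment[replica, logical_core])
```

For example, `coordinates(1, 0)` would return the coordinate vector stored for replica 1's single logical core.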
host_device
```
host_device(
    replica: int = 0, logical_core: int = 0, job: Optional[str] = None
) -> str
```
Returns the CPU device attached to a logical core.
lookup_replicas
```
lookup_replicas(
    task_id: int, logical_core: int
) -> List[int]
```
Lookup replica ids by task number and logical core.
Args | |
---|---|
`task_id` | TensorFlow task number. |
`logical_core` | An integer identifying a logical core. |

Returns | |
---|---|
A sorted list of the replicas that are attached to that task and `logical_core`. |
Raises | |
---|---|
`ValueError` | If no replica in the task contains the given logical core. |
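The semantics can be sketched with a hypothetical replica-to-task mapping; the mapping values and the exact error message below are illustrative assumptions, not the library's internals:

```python
from typing import List

# Hypothetical mapping: (replica, logical_core) -> TensorFlow task id.
replica_task = {
    (0, 0): 0,
    (1, 0): 0,
    (2, 0): 1,
    (3, 0): 1,
}

def lookup_replicas(task_id: int, logical_core: int) -> List[int]:
    """Sketch of lookup_replicas: sorted replica ids whose given
    logical core is placed on the given task."""
    replicas = sorted(
        r
        for (r, c), t in replica_task.items()
        if t == task_id and c == logical_core
    )
    if not replicas:
        # Mirrors the documented ValueError when no replica matches.
        raise ValueError(
            f"no replica in task {task_id} contains logical core "
            f"{logical_core}"
        )
    return replicas
```

Here `lookup_replicas(1, 0)` would return `[2, 3]`, and asking for a task that hosts no matching replica raises `ValueError`.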
tpu_device
```
tpu_device(
    replica: int = 0, logical_core: int = 0, job: Optional[str] = None
) -> str
```
Returns the name of the TPU device assigned to a logical core.
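As a sketch of the kind of string returned, the helper below formats a TPU device name. The `"/job:<job>/replica:0/task:<task>/device:TPU:<ordinal>"` layout is an assumption based on TensorFlow's general device-naming convention; the real method derives the task and ordinal from the device assignment.

```python
from typing import Optional

def tpu_device_name(task: int, ordinal: int, job: Optional[str] = None) -> str:
    """Hypothetical formatter for a TPU device string.

    Assumes TensorFlow's "/job:.../replica:0/task:.../device:TPU:..."
    naming convention; when no job is given, the job prefix is omitted.
    """
    prefix = f"/job:{job}" if job is not None else ""
    return f"{prefix}/replica:0/task:{task}/device:TPU:{ordinal}"
```

For example, `tpu_device_name(0, 1, job="worker")` yields `"/job:worker/replica:0/task:0/device:TPU:1"`.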
tpu_ordinal
```
tpu_ordinal(
    replica: int = 0, logical_core: int = 0
) -> int
```
Returns the ordinal of the TPU device assigned to a logical core.