Cluster Resolver for Google Cloud TPUs.
Inherits From: ClusterResolver
tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=None,
zone=None,
project=None,
job_name='worker',
coordinator_name=None,
coordinator_address=None,
credentials='default',
service=None,
discovery_url=None
)
This is an implementation of cluster resolvers for the Google Cloud TPU service.

TPUClusterResolver supports the following distinct environments:

* Google Compute Engine
* Google Kubernetes Engine
* Google internal

It can be passed into tf.distribute.TPUStrategy to support TF2 training on Cloud TPUs.
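For example, a minimal sketch of the TF2 setup might look like the following (the TPU name 'my-tpu' is a placeholder; on a Cloud TPU VM it can often be left empty to auto-resolve the locally attached TPU):

```python
import tensorflow as tf

# 'my-tpu' is a placeholder TPU name; pass '' on a Cloud TPU VM to
# auto-resolve the locally attached TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables and models created in this scope are replicated across
    # the TPU cores.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
```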
| Raises | |
|---|---|
| ImportError | If the googleapiclient is not installed. |
| ValueError | If no TPUs are specified. |
| RuntimeError | If an empty TPU name is specified and this is running in a Google Cloud environment. |
| Attributes | |
|---|---|
| environment | Returns the current environment which TensorFlow is running in. |
| task_id | Returns the task id this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when the user needs to run specific code according to the task index; see the sketch after this table. For more information, please see the tf.distribute.cluster_resolver.ClusterResolver documentation. |
| task_type | Returns the task type this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief' (a worker designated with more responsibility), 'worker' (a regular worker for training/evaluation), 'ps' (a parameter server), and 'evaluator' (an evaluator that evaluates checkpoints for metrics). See the Multi-worker configuration guide for more information about the 'chief' and 'worker' task types, which are the most commonly used. Having access to such information is useful when the user needs to run specific code according to task type; see the sketch after this table. For more information, please see the tf.distribute.cluster_resolver.ClusterResolver documentation. |
| tpu_hardware_feature | Returns the TPU topology info stored. |
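As referenced in the task_id and task_type descriptions above, a minimal sketch of such per-task branching might look like this (the 'worker' type and index 0 are illustrative choices):

```python
import tensorflow as tf

# 'my-tpu' is a placeholder TPU name.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')

if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0:
    # Runs only on the 'worker' instance with index 0, e.g. writing
    # checkpoints or summaries.
    pass
else:
    # Runs on every other instance.
    pass
```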
Methods
cluster_spec
cluster_spec()
Returns a ClusterSpec object based on the latest TPU information.
We retrieve the information from the GCE APIs every time this method is called.
| Returns |
|---|
| A ClusterSpec containing host information returned from Cloud TPUs, or None. |
| Raises | |
|---|---|
| RuntimeError | If the provided TPU is not healthy. |
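A minimal sketch of inspecting the returned ClusterSpec (the TPU name is a placeholder):

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
spec = resolver.cluster_spec()
if spec is not None:
    print(spec.jobs)                 # e.g. ['worker']
    print(spec.job_tasks('worker'))  # addresses of the TPU hosts
```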
connect
@staticmethod
connect(
    tpu=None, zone=None, project=None
)
Initializes TPU and returns a TPUClusterResolver.
This API will connect to the remote TPU cluster and initialize the TPU hardware. Example usage:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver.connect(
tpu='')
It can be viewed as a convenient wrapper of the following code:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
| Args | |
|---|---|
| tpu | A string corresponding to the TPU to use. It can be the TPU name or the TPU worker gRPC address. If not set, it will try to automatically resolve the TPU address on Cloud TPUs. |
| zone | Zone where the TPUs are located. If omitted or empty, we will assume that the zone of the TPU is the same as the zone of the GCE VM, which we will try to discover from the GCE metadata service. |
| project | Name of the GCP project containing Cloud TPUs. If omitted or empty, we will try to discover the project name of the GCE VM from the GCE metadata service. |

| Returns |
|---|
| An instance of TPUClusterResolver. |

| Raises | |
|---|---|
| NotFoundError | If no TPU devices are found in eager mode. |
get_job_name
get_job_name()
get_master
get_master()
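Neither method is documented above; as a hedged sketch, get_master() is assumed to return the same connection string as master(), and get_job_name() the job name passed at construction ('worker' by default):

```python
import tensorflow as tf

# Assumed behavior (not documented above): get_master() returns the same
# connection string as master(), and get_job_name() returns the job name
# passed to the constructor ('worker' by default).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
print(resolver.get_master())    # e.g. 'grpc://10.240.1.2:8470'
print(resolver.get_job_name())  # e.g. 'worker'
```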
get_tpu_system_metadata
get_tpu_system_metadata()
Returns the metadata of the TPU system.
Users can call this method to get some facts about the TPU system, such as the total number of cores, the number of TPU workers, and the devices, e.g.:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tpu_system_metadata = resolver.get_tpu_system_metadata()
num_hosts = tpu_system_metadata.num_hosts
| Returns |
|---|
| A tf.tpu.experimental.TPUSystemMetadata object. |
master
master(
task_type=None, task_id=None, rpc_layer=None
)
Get the Master string to be used for the session.
In the normal case, this returns the grpc path (grpc://1.2.3.4:8470) of the first instance in the ClusterSpec returned by the cluster_spec function.
If a non-TPU name is used when constructing a TPUClusterResolver, that will be returned instead (e.g. if the tpu argument's value when constructing this TPUClusterResolver was 'grpc://10.240.1.2:8470', 'grpc://10.240.1.2:8470' will be returned).
| Args | |
|---|---|
| task_type | (Optional, string) The type of the TensorFlow task of the master. |
| task_id | (Optional, integer) The index of the TensorFlow task of the master. |
| rpc_layer | (Optional, string) The RPC protocol TensorFlow should use to communicate with TPUs. |

| Returns |
|---|
| string, the connection string to use when creating a session. |

| Raises | |
|---|---|
| ValueError | If none of the TPUs specified exists. |
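A minimal sketch of retrieving the master connection string (the arguments shown are illustrative and the TPU name is a placeholder):

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
# Ask for the gRPC endpoint of the first TPU worker in the cluster spec.
master_address = resolver.master(rpc_layer='grpc')
print(master_address)  # e.g. 'grpc://10.240.1.2:8470'
```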
num_accelerators
num_accelerators(
task_type=None, task_id=None, config_proto=None
)
Returns the number of TPU cores per worker.
Connects to the master, lists all the devices present on the master, and counts them up. It also verifies that the device count per host in the cluster is the same before returning the number of TPU cores per host.
| Args | |
|---|---|
| task_type | Unused. |
| task_id | Unused. |
| config_proto | Used to create a connection to a TPU master in order to retrieve the system metadata. |

| Raises | |
|---|---|
| RuntimeError | If we cannot talk to a TPU worker after retrying or if the number of TPU devices per host is different. |
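A minimal sketch, assuming the returned value is a mapping from accelerator type to per-worker core count (the 'TPU' key and the count are illustrative):

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
# Assumption: the result maps accelerator type to the per-worker core count.
accelerators = resolver.num_accelerators()
num_tpu_cores = accelerators.get('TPU', 0)  # e.g. 8 on a v3-8 slice
```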
set_tpu_topology
set_tpu_topology(
serialized_tpu_topology
)
Sets the TPU topology info stored in this resolver.
__enter__
__enter__()
__exit__
__exit__(
type, value, traceback
)
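Since the resolver defines __enter__ and __exit__, it can be used as a context manager; a minimal sketch, assuming the with-block simply brackets the TPU setup and training code:

```python
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
with resolver:
    # Set up the TPU system and run training while the resolver's
    # context is active.
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    # ... build and train a model under strategy.scope() ...
```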