tf.contrib.tpu.TPUDistributionStrategy
The strategy for running a Keras model on a TPU.
tf.contrib.tpu.TPUDistributionStrategy(
tpu_cluster_resolver=None, using_single_core=False
)
Args

tpu_cluster_resolver: Any instance of TPUClusterResolver. If None, one is created with '' as the master address.

using_single_core: Bool. A debugging option, which may be removed once the model replication functionality is mature enough. If False (the default), the system automatically finds the best configuration, in terms of the number of TPU cores, for model replication, typically using all available TPU cores. If set to True, the model is forced to run on a single core, i.e., with no replication.
Raises

Exception: No TPU found on the given worker.
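A minimal usage sketch, assuming TensorFlow 1.x with the contrib namespace and access to a TPU worker (the TPU address below is a placeholder, not from this page). The strategy is typically passed to tf.contrib.tpu.keras_to_tpu_model to convert a Keras model for TPU execution. This cannot run without TPU hardware and is illustrative only.

```python
import tensorflow as tf

# Build an ordinary Keras model first.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(784,), activation='softmax'),
])
model.compile(optimizer='sgd', loss='categorical_crossentropy')

# Resolve the TPU cluster; the tpu= address is a hypothetical placeholder.
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    tpu='grpc://10.0.0.1:8470')

# Create the distribution strategy. Passing None instead would resolve
# a TPU with '' as the master address, per the Args table above.
strategy = tf.contrib.tpu.TPUDistributionStrategy(
    tpu_cluster_resolver=resolver)

# Convert the Keras model to run on the TPU under this strategy.
tpu_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
```

Setting using_single_core=True in the constructor would instead pin the model to one core with no replication, which is useful only for debugging.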
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.