tf.experimental.dtensor.barrier
Runs a barrier on the mesh.
tf.experimental.dtensor.barrier(
    mesh: tf.experimental.dtensor.Mesh,
    barrier_name: Optional[str] = None
)
Upon returning from the barrier, all operations issued before the barrier are
guaranteed to have completed across all clients. Currently, the barrier is
implemented by allocating a fully sharded tensor with the mesh shape and
running an all_reduce on it.
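This mechanism can be pictured with the minimal sketch below. It is not the
library's actual implementation; it only illustrates, assuming a Mesh object
named mesh already exists, how summing a fully sharded tensor forces a
mesh-wide all_reduce:

import tensorflow as tf
from tensorflow.experimental import dtensor

def barrier_sketch(mesh: dtensor.Mesh):
    # Rough illustration only, not the actual implementation of barrier().
    # Lay out a tensor so that every mesh dimension is sharded, giving each
    # device in the mesh its own piece.
    fully_sharded = dtensor.Layout(mesh.dim_names, mesh)
    ones = dtensor.call_with_layout(tf.ones, fully_sharded, shape=mesh.shape())
    # Summing the sharded tensor requires an all_reduce across the whole mesh,
    # so control cannot return until every client has participated.
    return tf.reduce_sum(ones)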
Example:
A barrier can be used before application exit to ensure completion of pending
ops.
import sys
from tensorflow.experimental import dtensor

# `mesh` is an existing Mesh that has a 'batch' dimension.
x = [1, 2, 3]
x = dtensor.relayout(x, dtensor.Layout.batch_sharded(mesh, 'batch', 1))
dtensor.barrier(mesh)
# At this point all devices on all clients in the mesh have completed the
# operations issued before the barrier, so it is safe to tear down the clients.
sys.exit()
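For reference, one way to obtain a mesh for the example above on a single
client is sketched below; the device type, dimension name, tensor values, and
barrier name are illustrative assumptions rather than requirements of the API:

import tensorflow as tf
from tensorflow.experimental import dtensor

# Build a 1-D mesh named 'batch'. This toy setup uses a single local CPU
# device; a real multi-client job would configure accelerators and call
# dtensor.initialize_accelerator_system() before creating a larger mesh.
mesh = dtensor.create_mesh([('batch', 1)], device_type='CPU')

x = dtensor.copy_to_mesh(tf.constant([1.0, 2.0, 3.0]),
                         dtensor.Layout.replicated(mesh, rank=1))
x = dtensor.relayout(x, dtensor.Layout.batch_sharded(mesh, 'batch', rank=1))
dtensor.barrier(mesh, barrier_name='pre_exit')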
Args
  mesh: The mesh to run the barrier on.
  barrier_name: The name of the barrier, mainly used for logging purposes.