Migrate single-worker multiple-GPU training

This guide demonstrates how to migrate single-worker multiple-GPU workflows from TensorFlow 1 to TensorFlow 2.

To perform synchronous training across multiple GPUs on one machine:

- In TensorFlow 1, you use the tf.estimator.Estimator APIs with tf.distribute.MirroredStrategy.
- In TensorFlow 2, you can use Keras Model.fit or a custom training loop with tf.distribute.MirroredStrategy.

Setup

Start with imports and a simple dataset for demonstration purposes:

import tensorflow as tf
import tensorflow.compat.v1 as tf1

features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]
eval_labels = [[0.8], [0.9], [1.]]
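
Since the rest of this guide is about multi-GPU training, it can help to confirm which GPUs TensorFlow can see. This is an optional check, not part of the original setup:

# Optional sanity check (not in the original setup): list the GPUs
# visible to TensorFlow; MirroredStrategy mirrors across these by default.
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))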

TensorFlow 1: Single-worker distributed training with tf.estimator.Estimator

This example demonstrates the canonical TensorFlow 1 workflow of single-worker multiple-GPU training. You need to set the distribution strategy (tf.distribute.MirroredStrategy) through the config parameter of the tf.estimator.Estimator:

def _input_fn():
  return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)

def _eval_input_fn():
  return tf1.data.Dataset.from_tensor_slices(
      (eval_features, eval_labels)).batch(1)

def _model_fn(features, labels, mode):
  logits = tf1.layers.Dense(1)(features)
  loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
  optimizer = tf1.train.AdagradOptimizer(0.05)
  train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
  return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

strategy = tf1.distribute.MirroredStrategy()
config = tf1.estimator.RunConfig(
    train_distribute=strategy, eval_distribute=strategy)
estimator = tf1.estimator.Estimator(model_fn=_model_fn, config=config)

train_spec = tf1.estimator.TrainSpec(input_fn=_input_fn)
eval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn)
tf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp5g_f_ufk
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp5g_f_ufk', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.python.distribute.mirrored_strategy.MirroredStrategyV1 object at 0x7f6853562450>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.python.distribute.mirrored_strategy.MirroredStrategyV1 object at 0x7f6853562450>, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600.
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:374: UserWarning: To make it possible to preserve tf.data options across serialization boundaries, their implementation has moved to be part of the TensorFlow graph. As a consequence, the options value is in general no longer known at graph construction time. Invoking this method in graph mode retains the legacy behavior of the original implementation, but note that the returned value might not reflect the actual value of the options.
  warnings.warn("To make it possible to preserve tf.data options across "
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/training/adagrad.py:77: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/util.py:95: DistributedIteratorV1.initialize (from tensorflow.python.distribute.input_lib) is deprecated and will be removed in a future version.
Instructions for updating:
Use the iterator's `initializer` property instead.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp5g_f_ufk/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
2021-09-22 20:15:43.228503: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorFromStringHandle' OpKernel for GPU devices compatible with node { {node MultiDeviceIteratorFromStringHandle} }
    .  Registered:  device='CPU'

2021-09-22 20:15:43.229960: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorGetNextFromShard' OpKernel for GPU devices compatible with node { {node MultiDeviceIteratorGetNextFromShard} }
    .  Registered:  device='CPU'
INFO:tensorflow:loss = 0.14477473, step = 0
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3...
INFO:tensorflow:Saving checkpoints for 3 into /tmp/tmp5g_f_ufk/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /replica:0/task:0/device:CPU:0 then broadcast to ('/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Starting evaluation at 2021-09-22T20:15:43
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmp5g_f_ufk/model.ckpt-3
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.17626s
INFO:tensorflow:Finished evaluation at 2021-09-22-20:15:44
INFO:tensorflow:Saving dict for global step 3: global_step = 3, loss = 1.1251448
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 3: /tmp/tmp5g_f_ufk/model.ckpt-3
INFO:tensorflow:Loss for final step: 0.45722964.
2021-09-22 20:15:44.095116: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorFromStringHandle' OpKernel for GPU devices compatible with node { {node MultiDeviceIteratorFromStringHandle} }
    .  Registered:  device='CPU'

2021-09-22 20:15:44.096454: W tensorflow/core/grappler/utils/graph_view.cc:836] No registered 'MultiDeviceIteratorGetNextFromShard' OpKernel for GPU devices compatible with node { {node MultiDeviceIteratorGetNextFromShard} }
    .  Registered:  device='CPU'
({'loss': 1.1251448, 'global_step': 3}, [])
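
Note that in the log above, MirroredStrategy picked up the single visible GPU automatically. By default it mirrors the model across all GPUs visible to the process; to restrict it, you can pass an explicit device list. The device names in this sketch are illustrative assumptions about the host, not part of the original example:

# Illustrative only: restrict MirroredStrategy to specific devices.
# The device names below are assumptions; adapt them to your machine.
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
print("Number of replicas in sync:", strategy.num_replicas_in_sync)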

TensorFlow 2: Single-worker training with Keras

When migrating to TensorFlow 2, you can use the Keras APIs with tf.distribute.MirroredStrategy.

If you use the tf.keras APIs for model building and Keras Model.fit for training, the main difference is that you instantiate the Keras model, an optimizer, and metrics in the context of Strategy.scope, instead of defining a config for the tf.estimator.Estimator.

If you need to use a custom training loop, check out the Using tf.distribute.Strategy with custom training loops guide; a minimal sketch of such a loop appears after the Model.fit example below.

dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
eval_dataset = tf.data.Dataset.from_tensor_slices(
    (eval_features, eval_labels)).batch(1)

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
  model.compile(optimizer=optimizer, loss='mse')

model.fit(dataset)
model.evaluate(eval_dataset, return_dict=True)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
2021-09-22 20:15:44.265351: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2"
op: "TensorSliceDataset"
input: "Placeholder/_0"
input: "Placeholder/_1"
attr {
  key: "Toutput_types"
  value {
    list {
      type: DT_FLOAT
      type: DT_FLOAT
    }
  }
}
attr {
  key: "output_shapes"
  value {
    list {
      shape {
        dim {
          size: 2
        }
      }
      shape {
        dim {
          size: 1
        }
      }
    }
  }
}
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
3/3 [==============================] - 2s 3ms/step - loss: 0.2363
2021-09-22 20:15:46.836745: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2"
op: "TensorSliceDataset"
input: "Placeholder/_0"
input: "Placeholder/_1"
attr {
  key: "Toutput_types"
  value {
    list {
      type: DT_FLOAT
      type: DT_FLOAT
    }
  }
}
attr {
  key: "output_shapes"
  value {
    list {
      shape {
        dim {
          size: 2
        }
      }
      shape {
        dim {
          size: 1
        }
      }
    }
  }
}
3/3 [==============================] - 1s 3ms/step - loss: 0.0079
{'loss': 0.007883546873927116}
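
As mentioned above, Model.fit is not the only option in TensorFlow 2: you can also drive MirroredStrategy from a custom training loop. The following is a minimal, hedged sketch of that pattern, reusing the toy features/labels from the setup; it follows the standard tf.distribute custom-loop recipe (strategy.run plus a cross-replica reduce) and is an illustration, not the full guide's example:

# A minimal sketch of a custom training loop with MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()

# Scale the global batch size by the number of replicas in sync.
per_replica_batch_size = 1
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync
dist_dataset = strategy.experimental_distribute_dataset(
    tf.data.Dataset.from_tensor_slices((features, labels))
    .batch(global_batch_size))

with strategy.scope():
  model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)

@tf.function
def train_step(inputs):
  def step_fn(x, y):
    with tf.GradientTape() as tape:
      predictions = model(x, training=True)
      # Average the per-example loss over the *global* batch so gradients
      # sum correctly across replicas.
      loss = tf.nn.compute_average_loss(
          tf.keras.losses.mse(y, predictions),
          global_batch_size=global_batch_size)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
  per_replica_losses = strategy.run(step_fn, args=inputs)
  return strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
  print('loss:', float(train_step(batch)))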

Next steps

To learn more about distributed training with tf.distribute.MirroredStrategy in TensorFlow 2, check out the following documentation: