This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset.

This example loads the MNIST dataset from a .npz file, but where the NumPy arrays come from is not important.
Setup
import numpy as np
import tensorflow as tf
Load from a .npz file
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
  train_examples = data['x_train']
  train_labels = data['y_train']
  test_examples = data['x_test']
  test_labels = data['y_test']
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
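Before building a pipeline, it can help to confirm what was loaded. A minimal check, where the commented shapes assume the standard MNIST split (60,000 training and 10,000 test images of 28x28 pixels):

print(train_examples.shape, train_examples.dtype)  # (60000, 28, 28) uint8
print(train_labels.shape, train_labels.dtype)      # (60000,) uint8
print(test_examples.shape, test_examples.dtype)    # (10000, 28, 28) uint8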
Load NumPy arrays with tf.data.Dataset

Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
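Each element of the resulting dataset is an (example, label) pair. As a quick check (a sketch, not part of the original tutorial; the shapes and dtypes assume the MNIST arrays above), you can inspect element_spec or pull a single element:

print(train_dataset.element_spec)
# (TensorSpec(shape=(28, 28), dtype=tf.uint8, name=None),
#  TensorSpec(shape=(), dtype=tf.uint8, name=None))

for example, label in train_dataset.take(1):
  print(example.shape, int(label))  # (28, 28) and a digit 0-9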
Use the datasets
Shuffle and batch the datasets
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
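Dataset.shuffle samples uniformly at random from a buffer of SHUFFLE_BUFFER_SIZE elements, so a buffer smaller than the dataset (100 here, versus 60,000 training examples) only approximates a full shuffle; Dataset.batch then groups 64 consecutive elements per training step. As a quick sanity check (an optional sketch), you can pull one batch and confirm its shapes:

# Grab one batch: 64 images of 28x28 pixels and 64 scalar labels.
# The last batch of an epoch may be smaller than 64.
images, labels = next(iter(train_dataset))
print(images.shape)  # (64, 28, 28)
print(labels.shape)  # (64,)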
Build and train a model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['sparse_categorical_accuracy'])
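Optionally, print a summary before training to confirm the stack: Flatten turns each 28x28 image into a 784-vector, the hidden Dense layer has 784 * 128 + 128 = 100,480 parameters, and the 10-unit logits layer has 128 * 10 + 10 = 1,290.

model.summary()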
model.fit(train_dataset, epochs=10)
Epoch 1/10
938/938 [==============================] - 3s 2ms/step - loss: 3.2030 - sparse_categorical_accuracy: 0.8816
Epoch 2/10
938/938 [==============================] - 2s 2ms/step - loss: 0.5115 - sparse_categorical_accuracy: 0.9289
Epoch 3/10
938/938 [==============================] - 2s 2ms/step - loss: 0.3621 - sparse_categorical_accuracy: 0.9460
Epoch 4/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2947 - sparse_categorical_accuracy: 0.9564
Epoch 5/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2568 - sparse_categorical_accuracy: 0.9625
Epoch 6/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2264 - sparse_categorical_accuracy: 0.9658
Epoch 7/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2111 - sparse_categorical_accuracy: 0.9696
Epoch 8/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1898 - sparse_categorical_accuracy: 0.9717
Epoch 9/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1725 - sparse_categorical_accuracy: 0.9746
Epoch 10/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1614 - sparse_categorical_accuracy: 0.9762
<keras.src.callbacks.History at 0x7f31082e6700>
model.evaluate(test_dataset)
157/157 [==============================] - 0s 2ms/step - loss: 0.5400 - sparse_categorical_accuracy: 0.9591
[0.5399600267410278, 0.9591000080108643]
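Because the final Dense layer outputs raw logits (hence from_logits=True in the loss), converting them to class probabilities takes a softmax. A minimal inference sketch on one test batch, reusing the datasets defined above:

# Take one batch of test images and predict digit classes.
images, labels = next(iter(test_dataset))
logits = model(images, training=False)  # raw logits, shape (64, 10)
predictions = tf.argmax(tf.nn.softmax(logits), axis=-1)
print(predictions[:10].numpy())  # predicted digits
print(labels[:10].numpy())       # ground-truth digits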