This guide trains a neural network model to classify images of clothing, like sneakers and shirts, saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving, rather than the modeling and training in TensorFlow, so for a complete example that focuses on the modeling and training, see the Basic Classification example.
This guide uses tf.keras, a high-level API to build and train models in TensorFlow.
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
Create your model
Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
Figure 1. Fashion-MNIST samples (by Zalando, MIT License).
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, often used as the "Hello, World" of machine learning programs for computer vision. You can access the Fashion MNIST dataset directly from TensorFlow; just import and load the data.
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale the values to 0.0 to 1.0
train_images = train_images / 255.0
test_images = test_images / 255.0
# reshape for feeding into the model
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step

train_images.shape: (60000, 28, 28, 1), of float64
test_images.shape: (10000, 28, 28, 1), of float64
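Before building a model, a quick visual sanity check of the data can help. This snippet is optional and not part of the original flow; it just displays the first training image with its label:

# Optional: display the first training image with its label to verify
# the data loaded and reshaped as expected.
plt.figure()
plt.imshow(train_images[0].reshape(28, 28), cmap='gray')
plt.title(class_names[train_labels[0]])
plt.axis('off')
plt.show()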
Train and evaluate your model
Let's use the simplest possible CNN, since we're not focused on the modeling part.
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, name='Dense')
])
model.summary()
testing = False
epochs = 5
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(train_images, train_labels, epochs=epochs)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
2021-12-04 10:29:34.128871: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcusolver.so.10'; dlerror: libcusolver.so.10: cannot open shared object file: No such file or directory
2021-12-04 10:29:34.129907: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
Conv1 (Conv2D)               (None, 13, 13, 8)         80
_________________________________________________________________
flatten (Flatten)            (None, 1352)              0
_________________________________________________________________
Dense (Dense)                (None, 10)                13530
=================================================================
Total params: 13,610
Trainable params: 13,610
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.7204 - sparse_categorical_accuracy: 0.7549
Epoch 2/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3997 - sparse_categorical_accuracy: 0.8611
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3580 - sparse_categorical_accuracy: 0.8754
Epoch 4/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3399 - sparse_categorical_accuracy: 0.8780
Epoch 5/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3232 - sparse_categorical_accuracy: 0.8849
313/313 [==============================] - 0s 1ms/step - loss: 0.3586 - sparse_categorical_accuracy: 0.8738

Test accuracy: 0.8737999796867371
Save your model
To load our trained model into TensorFlow Serving, we first need to save it in SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable", we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
# Save the model to a versioned export directory.
# The signature definition is defined by the input and output tensors,
# and stored with the default serving key
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
export_path = /tmp/1

2021-12-04 10:29:53.392905: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/1/assets

Saved model:
total 88
drwxr-xr-x 2 kbuilder kbuilder  4096 Dec  4 10:29 assets
-rw-rw-r-- 1 kbuilder kbuilder 78055 Dec  4 10:29 saved_model.pb
drwxr-xr-x 2 kbuilder kbuilder  4096 Dec  4 10:29 variables
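Exporting further versions follows the same pattern: each one goes into a new numbered subdirectory under the base path. A minimal sketch (not executed here; TensorFlow Serving loads the highest version number it finds under model_base_path):

# Sketch: exporting a hypothetical second version of the model to /tmp/2.
new_version = 2
new_export_path = os.path.join(MODEL_DIR, str(new_version))
tf.keras.models.save_model(model, new_export_path)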
Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
!saved_model_cli show --dir {export_path} --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['Conv1_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 28, 28, 1)
        name: serving_default_Conv1_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['Dense'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 10)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't, this would be important information. It doesn't tell us everything (like the fact that this is grayscale image data, for example), but it's a great start.
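If you only need the serving signature rather than the full dump, the same CLI can narrow the query; this is a variant of the command above using saved_model_cli's standard --tag_set and --signature_def flags:

!saved_model_cli show --dir {export_path} --tag_set serve --signature_def serving_default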
Serve your model with TensorFlow Serving
Add the TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using Aptitude, since this Colab runs in a Debian environment. We'll add the tensorflow-model-server package to the list of packages that Aptitude knows about. Note that we're running as root.
import sys
# We need sudo prefix if not on a Google Colab.
if 'google.colab' not in sys.modules:
SUDO_IF_NEEDED = 'sudo'
else:
SUDO_IF_NEEDED = ''
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add -
!{SUDO_IF_NEEDED} apt update
deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2943  100  2943    0     0  15571      0 --:--:-- --:--:-- --:--:-- 15571
OK
Hit:1 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/amd64 InRelease
Get:5 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64 InRelease [1481 B]
Get:6 https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64 InRelease [1474 B]
Ign:7 http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Get:8 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3012 B]
Hit:9 http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release
Get:10 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:11 https://packages.cloud.google.com/apt eip-cloud-bionic InRelease [5419 B]
Get:12 http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease [5483 B]
Hit:13 http://archive.canonical.com/ubuntu bionic InRelease
Err:11 https://packages.cloud.google.com/apt eip-cloud-bionic InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
Get:15 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [339 B]
Err:12 http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
Get:16 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [348 B]
Fetched 106 kB in 1s (103 kB/s)
119 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.cloud.google.com/apt eip-cloud-bionic InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: Failed to fetch https://packages.cloud.google.com/apt/dists/eip-cloud-bionic/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: Failed to fetch http://packages.cloud.google.com/apt/dists/google-cloud-logging-wheezy/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FEEA9169307EA071 NO_PUBKEY 8B57C5C2836F4BEB
W: Some index files failed to download. They have been ignored, or old ones used instead.
Install TensorFlow Serving
This is all you need - one command line!
!{SUDO_IF_NEEDED} apt-get install tensorflow-model-server
The following packages were automatically installed and are no longer required:
  linux-gcp-5.4-headers-5.4.0-1040 linux-gcp-5.4-headers-5.4.0-1043
  linux-gcp-5.4-headers-5.4.0-1044 linux-gcp-5.4-headers-5.4.0-1049
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  tensorflow-model-server
0 upgraded, 1 newly installed, 0 to remove and 119 not upgraded.
Need to get 335 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.7.0 [335 MB]
Fetched 335 MB in 7s (45.2 MB/s)
Selecting previously unselected package tensorflow-model-server.
(Reading database ... 264341 files and directories currently installed.)
Preparing to unpack .../tensorflow-model-server_2.7.0_all.deb ...
Unpacking tensorflow-model-server (2.7.0) ...
Setting up tensorflow-model-server (2.7.0) ...
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads, we can start making inference requests using REST. There are some important parameters:
- rest_api_port: The port that you'll use for REST requests.
- model_name: You'll use this in the URL of REST requests. It can be anything.
- model_base_path: This is the path to the directory where you've saved your model.
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=fashion_model \
  --model_base_path="${MODEL_DIR}" >server.log 2>&1

!tail server.log
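Before making prediction requests, it's worth confirming that the server came up and loaded the model. TensorFlow Serving exposes a model status endpoint at /v1/models/<model_name>; a quick check, assuming the server is listening on port 8501 as configured above:

import requests
# Ask TensorFlow Serving for the model's status; a healthy server reports
# a version entry with state "AVAILABLE".
status = requests.get('http://localhost:8501/v1/models/fashion_model')
print(status.text)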
Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
def show(idx, title):
plt.figure()
plt.imshow(test_images[idx].reshape(28,28))
plt.axis('off')
plt.title('\n\n{}'.format(title), fontdict={'size': 16})
import random
rando = random.randint(0,len(test_images)-1)
show(rando, 'An Example Image: {}'.format(class_names[test_labels[rando]]))
Ok, that looks interesting. How hard is that for you to recognize? Now let's create the JSON object for a batch of three inference requests, and see how well our model recognizes things:
import json
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
Data: {"signature_name": "serving_default", "instances": ... [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]]]}
Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
# docs_infra: no_execute
!pip install -q requests
import requests
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0]))
A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
# docs_infra: no_execute
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model/versions/1:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
for i in range(0,3):
show(i, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[i])], np.argmax(predictions[i]), class_names[test_labels[i]], test_labels[i]))
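As a final sanity check (not part of the original notebook), we can confirm that the served model agrees with the in-memory Keras model on the same batch; both return raw logits for the 10 classes:

# Compare served predictions against the local model on the same three examples.
# The tolerance allows for rounding introduced by the JSON round-trip.
local_logits = model.predict(test_images[0:3])
print('Served and local outputs match:',
      np.allclose(local_logits, np.array(predictions), atol=1e-4))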