This tutorial demonstrates two ways to load and preprocess text.

- First, you will use Keras utilities and preprocessing layers. These include tf.keras.utils.text_dataset_from_directory to turn the data into a tf.data.Dataset, and tf.keras.layers.TextVectorization for data standardization, tokenization, and vectorization. If you are new to TensorFlow, you should start with these.
- Then, you will use lower-level utilities such as tf.data.TextLineDataset to load text files, and TensorFlow Text APIs, such as text.UnicodeScriptTokenizer and text.case_fold_utf8, to preprocess the data for finer-grained control.
# Be sure you're using the stable versions of both `tensorflow` and
# `tensorflow-text`, for binary compatibility.
pip uninstall -y tf-nightly keras-nightly
pip install tensorflow
pip install tensorflow-text
import collections
import pathlib
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
from tensorflow.keras import utils
from tensorflow.keras.layers import TextVectorization
import tensorflow_datasets as tfds
import tensorflow_text as tf_text
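As a quick optional sanity check (not part of the original tutorial), you can print both package versions; the tensorflow-text wheel must be built against the same TensorFlow release you have installed. The only assumption here is that both packages expose __version__, which current releases do.
# Optional: confirm the installed versions of both packages.
print("TensorFlow version:", tf.__version__)
print("TensorFlow Text version:", tf_text.__version__)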
Example 1: Predict the tag for a Stack Overflow question
As a first example, you will download a dataset of programming questions from Stack Overflow. Each question ("How do I sort a dictionary by value?") is labeled with exactly one tag (Python, CSharp, JavaScript, or Java). Your task is to develop a model that predicts the tag for a question. This is an example of multi-class classification, an important and widely applicable kind of machine learning problem.
Download and explore the dataset
Begin by downloading the Stack Overflow dataset using tf.keras.utils.get_file, and exploring the directory structure:
data_url = 'https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz'
dataset_dir = utils.get_file(
origin=data_url,
untar=True,
cache_dir='stack_overflow',
cache_subdir='')
dataset_dir = pathlib.Path(dataset_dir).parent
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz 6053888/6053168 [==============================] - 0s 0us/step 6062080/6053168 [==============================] - 0s 0us/step
list(dataset_dir.iterdir())
[PosixPath('/tmp/.keras/train'), PosixPath('/tmp/.keras/README.md'), PosixPath('/tmp/.keras/stack_overflow_16k.tar.gz'), PosixPath('/tmp/.keras/test')]
train_dir = dataset_dir/'train'
list(train_dir.iterdir())
[PosixPath('/tmp/.keras/train/java'), PosixPath('/tmp/.keras/train/csharp'), PosixPath('/tmp/.keras/train/javascript'), PosixPath('/tmp/.keras/train/python')]
The train/csharp, train/java, train/python, and train/javascript directories contain many text files, each of which is a Stack Overflow question.
Print an example file and inspect the data:
sample_file = train_dir/'python/1755.txt'
with open(sample_file) as f:
  print(f.read())
why does this blank program print true x=true.def stupid():. x=false.stupid().print x
Load the dataset
Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the tf.keras.utils.text_dataset_from_directory utility to create a labeled tf.data.Dataset. If you're new to tf.data, it's a powerful collection of tools for building input pipelines. (Learn more in the tf.data: Build TensorFlow input pipelines guide.)
The tf.keras.utils.text_dataset_from_directory API expects a directory structure as follows:
train/
...csharp/
......1.txt
......2.txt
...java/
......1.txt
......2.txt
...javascript/
......1.txt
......2.txt
...python/
......1.txt
......2.txt
When running a machine learning experiment, it is a best practice to divide your dataset into three splits: training, validation, and test.
The Stack Overflow dataset has already been divided into training and test sets, but it lacks a validation set.
Create a validation set using an 80:20 split of the training data with tf.keras.utils.text_dataset_from_directory, setting validation_split to 0.2 (i.e. 20%):
batch_size = 32
seed = 42
raw_train_ds = utils.text_dataset_from_directory(
train_dir,
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
Found 8000 files belonging to 4 classes. Using 6400 files for training.
As the previous cell output suggests, there are 8,000 examples in the training folder, of which you will use 80% (or 6,400) for training. You will learn in a moment that you can train a model by passing a tf.data.Dataset directly to Model.fit.
First, iterate over the dataset and print out a few examples to get a feel for the data.
for text_batch, label_batch in raw_train_ds.take(1):
  for i in range(10):
    print("Question: ", text_batch.numpy()[i])
    print("Label:", label_batch.numpy()[i])
Question: b'"my tester is going to the wrong constructor i am new to programming so if i ask a question that can be easily fixed, please forgive me. my program has a tester class with a main. when i send that to my regularpolygon class, it sends it to the wrong constructor. i have two constructors. 1 without perameters..public regularpolygon(). {. mynumsides = 5;. mysidelength = 30;. }//end default constructor...and my second, with perameters. ..public regularpolygon(int numsides, double sidelength). {. mynumsides = numsides;. mysidelength = sidelength;. }// end constructor...in my tester class i have these two lines:..regularpolygon shape = new regularpolygon(numsides, sidelength);. shape.menu();...numsides and sidelength were declared and initialized earlier in the testing class...so what i want to happen, is the tester class sends numsides and sidelength to the second constructor and use it in that class. but it only uses the default constructor, which therefor ruins the whole rest of the program. can somebody help me?..for those of you who want to see more of my code: here you go..public double vertexangle(). {. system.out.println(""the vertex angle method: "" + mynumsides);// prints out 5. system.out.println(""the vertex angle method: "" + mysidelength); // prints out 30.. double vertexangle;. vertexangle = ((mynumsides - 2.0) / mynumsides) * 180.0;. return vertexangle;. }//end method vertexangle..public void menu().{. system.out.println(mynumsides); // prints out what the user puts in. system.out.println(mysidelength); // prints out what the user puts in. gotographic();. calcr(mynumsides, mysidelength);. calcr(mynumsides, mysidelength);. print(); .}// end menu...this is my entire tester class:..public static void main(string[] arg).{. int numsides;. double sidelength;. scanner keyboard = new scanner(system.in);.. system.out.println(""welcome to the regular polygon program!"");. system.out.println();.. system.out.print(""enter the number of sides of the polygon ==> "");. numsides = keyboard.nextint();. system.out.println();.. system.out.print(""enter the side length of each side ==> "");. sidelength = keyboard.nextdouble();. system.out.println();.. regularpolygon shape = new regularpolygon(numsides, sidelength);. shape.menu();.}//end main...for testing it i sent it numsides 4 and sidelength 100."\n' Label: 1 Question: b'"blank code slow skin detection this code changes the color space to lab and using a threshold finds the skin area of an image. but it\'s ridiculously slow. i don\'t know how to make it faster ? ..from colormath.color_objects import *..def skindetection(img, treshold=80, color=[255,20,147]):.. print img.shape. res=img.copy(). for x in range(img.shape[0]):. for y in range(img.shape[1]):. rgbimg=rgbcolor(img[x,y,0],img[x,y,1],img[x,y,2]). labimg=rgbimg.convert_to(\'lab\', debug=false). if (labimg.lab_l > treshold):. res[x,y,:]=color. else: . res[x,y,:]=img[x,y,:].. return res"\n' Label: 3 Question: b'"option and validation in blank i want to add a new option on my system where i want to add two text files, both rental.txt and customer.txt. inside each text are id numbers of the customer, the videotape they need and the price...i want to place it as an option on my code. right now i have:...add customer.rent return.view list.search.exit...i want to add this as my sixth option. say for example i ordered a video, it would display the price and would let me confirm the price and if i am going to buy it or not...here is my current code:.. import blank.io.*;. 
import blank.util.arraylist;. import static blank.lang.system.out;.. public class rentalsystem{. static bufferedreader input = new bufferedreader(new inputstreamreader(system.in));. static file file = new file(""file.txt"");. static arraylist<string> list = new arraylist<string>();. static int rows;.. public static void main(string[] args) throws exception{. introduction();. system.out.print(""nn"");. login();. system.out.print(""nnnnnnnnnnnnnnnnnnnnnn"");. introduction();. string repeat;. do{. loadfile();. system.out.print(""nwhat do you want to do?nn"");. system.out.print(""n - - - - - - - - - - - - - - - - - - - - - - -"");. system.out.print(""nn | 1. add customer | 2. rent return |n"");. system.out.print(""n - - - - - - - - - - - - - - - - - - - - - - -"");. system.out.print(""nn | 3. view list | 4. search |n"");. system.out.print(""n - - - - - - - - - - - - - - - - - - - - - - -"");. system.out.print(""nn | 5. exit |n"");. system.out.print(""n - - - - - - - - - -"");. system.out.print(""nnchoice:"");. int choice = integer.parseint(input.readline());. switch(choice){. case 1:. writedata();. break;. case 2:. rentdata();. break;. case 3:. viewlist();. break;. case 4:. search();. break;. case 5:. system.out.println(""goodbye!"");. system.exit(0);. default:. system.out.print(""invalid choice: "");. break;. }. system.out.print(""ndo another task? [y/n] "");. repeat = input.readline();. }while(repeat.equals(""y""));.. if(repeat!=""y"") system.out.println(""ngoodbye!"");.. }.. public static void writedata() throws exception{. system.out.print(""nname: "");. string cname = input.readline();. system.out.print(""address: "");. string add = input.readline();. system.out.print(""phone no.: "");. string pno = input.readline();. system.out.print(""rental amount: "");. string ramount = input.readline();. system.out.print(""tapenumber: "");. string tno = input.readline();. system.out.print(""title: "");. string title = input.readline();. system.out.print(""date borrowed: "");. string dborrowed = input.readline();. system.out.print(""due date: "");. string ddate = input.readline();. createline(cname, add, pno, ramount,tno, title, dborrowed, ddate);. rentdata();. }.. public static void createline(string name, string address, string phone , string rental, string tapenumber, string title, string borrowed, string due) throws exception{. filewriter fw = new filewriter(file, true);. fw.write(""nname: ""+name + ""naddress: "" + address +""nphone no.: ""+ phone+""nrentalamount: ""+rental+""ntape no.: ""+ tapenumber+""ntitle: ""+ title+""ndate borrowed: ""+borrowed +""ndue date: ""+ due+"":rn"");. fw.close();. }.. public static void loadfile() throws exception{. try{. list.clear();. fileinputstream fstream = new fileinputstream(file);. bufferedreader br = new bufferedreader(new inputstreamreader(fstream));. rows = 0;. while( br.ready()). {. list.add(br.readline());. rows++;. }. br.close();. } catch(exception e){. system.out.println(""list not yet loaded."");. }. }.. public static void viewlist(){. system.out.print(""n~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~"");. system.out.print("" |list of all costumers|"");. system.out.print(""~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~"");. for(int i = 0; i <rows; i++){. system.out.println(list.get(i));. }. }. public static void rentdata()throws exception. { system.out.print(""n~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~"");. system.out.print("" |rent data list|"");. system.out.print(""~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~"");. system.out.print(""nenter customer name: "");. string cname = input.readline();. system.out.print(""date borrowed: "");. 
string dborrowed = input.readline();. system.out.print(""due date: "");. string ddate = input.readline();. system.out.print(""return date: "");. string rdate = input.readline();. system.out.print(""rent amount: "");. string ramount = input.readline();.. system.out.print(""you pay:""+ramount);... }. public static void search()throws exception. { system.out.print(""n~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~"");. system.out.print("" |search costumers|"");. system.out.print(""~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~"");. system.out.print(""nenter costumer name: "");. string cname = input.readline();. boolean found = false;.. for(int i=0; i < rows; i++){. string temp[] = list.get(i).split("","");.. if(cname.equals(temp[0])){. system.out.println(""search result:nyou are "" + temp[0] + "" from "" + temp[1] + "".""+ temp[2] + "".""+ temp[3] + "".""+ temp[4] + "".""+ temp[5] + "" is "" + temp[6] + "".""+ temp[7] + "" is "" + temp[8] + ""."");. found = true;. }. }.. if(!found){. system.out.print(""no results."");. }.. }.. public static boolean evaluate(string uname, string pass){. if (uname.equals(""admin"")&&pass.equals(""12345"")) return true;. else return false;. }.. public static string login()throws exception{. bufferedreader input=new bufferedreader(new inputstreamreader(system.in));. int counter=0;. do{. system.out.print(""username:"");. string uname =input.readline();. system.out.print(""password:"");. string pass =input.readline();.. boolean accept= evaluate(uname,pass);.. if(accept){. break;. }else{. system.out.println(""incorrect username or password!"");. counter ++;. }. }while(counter<3);.. if(counter !=3) return ""login successful"";. else return ""login failed"";. }. public static void introduction() throws exception{.. system.out.println("" - - - - - - - - - - - - - - - - - - - - - - - - -"");. system.out.println("" ! r e n t a l !"");. system.out.println("" ! ~ ~ ~ ~ ~ ! ================= ! ~ ~ ~ ~ ~ !"");. system.out.println("" ! s y s t e m !"");. system.out.println("" - - - - - - - - - - - - - - - - - - - - - - - - -"");. }..}"\n' Label: 1 Question: b'"exception: dynamic sql generation for the updatecommand is not supported against a selectcommand that does not return any key i dont know what is the problem this my code : ..string nomtable;..datatable listeetablissementtable = new datatable();.datatable listeinteretstable = new datatable();.dataset ds = new dataset();.sqldataadapter da;.sqlcommandbuilder cmdb;..private void listeinterets_click(object sender, eventargs e).{. nomtable = ""listeinteretstable"";. d.cnx.open();. da = new sqldataadapter(""select nome from offices"", d.cnx);. ds = new dataset();. da.fill(ds, nomtable);. datagridview1.datasource = ds.tables[nomtable];.}..private void sauvgarder_click(object sender, eventargs e).{. d.cnx.open();. cmdb = new sqlcommandbuilder(da);. da.update(ds, nomtable);. d.cnx.close();.}"\n' Label: 0 Question: b'"parameter with question mark and super in blank, i\'ve come across a method that is formatted like this:..public final subscription subscribe(final action1<? super t> onnext, final action1<throwable> onerror) {.}...in the first parameter, what does the question mark and super mean?"\n' Label: 1 Question: b'call two objects wsdl the first time i got a very strange wsdl. 
..i would like to call the object (interface - invoicecheck_out) do you know how?....i would like to call the object (variable) do you know how?..try to call (it`s ok)....try to call (how call this?)\n' Label: 0 Question: b"how to correctly make the icon for systemtray in blank using icon sizes of any dimension for systemtray doesn't look good overall. .what is the correct way of making icons for windows system tray?..screenshots: http://imgur.com/zsibwn9..icon: http://imgur.com/vsh4zo8\n" Label: 0 Question: b'"is there a way to check a variable that exists in a different script than the original one? i\'m trying to check if a variable, which was previously set to true in 2.py in 1.py, as 1.py is only supposed to continue if the variable is true...2.py..import os..completed = false..#some stuff here..completed = true...1.py..import 2 ..if completed == true. #do things...however i get a syntax error at ..if completed == true"\n' Label: 3 Question: b'"blank control flow i made a number which asks for 2 numbers with blank and responds with the corresponding message for the case. how come it doesnt work for the second number ? .regardless what i enter for the second number , i am getting the message ""your number is in the range 0-10""...using system;.using system.collections.generic;.using system.linq;.using system.text;..namespace consoleapplication1.{. class program. {. static void main(string[] args). {. string myinput; // declaring the type of the variables. int myint;.. string number1;. int number;... console.writeline(""enter a number"");. myinput = console.readline(); //muyinput is a string which is entry input. myint = int32.parse(myinput); // myint converts the string into an integer.. if (myint > 0). console.writeline(""your number {0} is greater than zero."", myint);. else if (myint < 0). console.writeline(""your number {0} is less than zero."", myint);. else. console.writeline(""your number {0} is equal zero."", myint);.. console.writeline(""enter another number"");. number1 = console.readline(); . number = int32.parse(myinput); .. if (number < 0 || number == 0). console.writeline(""your number {0} is less than zero or equal zero."", number);. else if (number > 0 && number <= 10). console.writeline(""your number {0} is in the range from 0 to 10."", number);. else. console.writeline(""your number {0} is greater than 10."", number);.. console.writeline(""enter another number"");.. }. } .}"\n' Label: 0 Question: b'"credentials cannot be used for ntlm authentication i am getting org.apache.commons.httpclient.auth.invalidcredentialsexception: credentials cannot be used for ntlm authentication: exception in eclipse..whether it is possible mention eclipse to take system proxy settings directly?..public class httpgetproxy {. private static final string proxy_host = ""proxy.****.com"";. private static final int proxy_port = 6050;.. public static void main(string[] args) {. httpclient client = new httpclient();. httpmethod method = new getmethod(""https://kodeblank.org"");.. hostconfiguration config = client.gethostconfiguration();. config.setproxy(proxy_host, proxy_port);.. string username = ""*****"";. string password = ""*****"";. credentials credentials = new usernamepasswordcredentials(username, password);. authscope authscope = new authscope(proxy_host, proxy_port);.. client.getstate().setproxycredentials(authscope, credentials);.. try {. client.executemethod(method);.. if (method.getstatuscode() == httpstatus.sc_ok) {. string response = method.getresponsebodyasstring();. 
system.out.println(""response = "" + response);. }. } catch (ioexception e) {. e.printstacktrace();. } finally {. method.releaseconnection();. }. }.}...exception:... dec 08, 2017 1:41:39 pm . org.apache.commons.httpclient.auth.authchallengeprocessor selectauthscheme. info: ntlm authentication scheme selected. dec 08, 2017 1:41:39 pm org.apache.commons.httpclient.httpmethoddirector executeconnect. severe: credentials cannot be used for ntlm authentication: . org.apache.commons.httpclient.usernamepasswordcredentials. org.apache.commons.httpclient.auth.invalidcredentialsexception: credentials . cannot be used for ntlm authentication: . enter code here . org.apache.commons.httpclient.usernamepasswordcredentials. at org.apache.commons.httpclient.auth.ntlmscheme.authenticate(ntlmscheme.blank:332). at org.apache.commons.httpclient.httpmethoddirector.authenticateproxy(httpmethoddirector.blank:320). at org.apache.commons.httpclient.httpmethoddirector.executeconnect(httpmethoddirector.blank:491). at org.apache.commons.httpclient.httpmethoddirector.executewithretry(httpmethoddirector.blank:391). at org.apache.commons.httpclient.httpmethoddirector.executemethod(httpmethoddirector.blank:171). at org.apache.commons.httpclient.httpclient.executemethod(httpclient.blank:397). at org.apache.commons.httpclient.httpclient.executemethod(httpclient.blank:323). at httpgetproxy.main(httpgetproxy.blank:31). dec 08, 2017 1:41:39 pm org.apache.commons.httpclient.httpmethoddirector processproxyauthchallenge. info: failure authenticating with ntlm @proxy.****.com:6050"\n' Label: 1
The labels are 0, 1, 2, or 3. To check which of these corresponds to which string label, you can inspect the class_names property on the dataset:
for i, label in enumerate(raw_train_ds.class_names):
  print("Label", i, "corresponds to", label)
Label 0 corresponds to csharp Label 1 corresponds to java Label 2 corresponds to javascript Label 3 corresponds to python
Next, you will create validation and test sets using tf.keras.utils.text_dataset_from_directory. You will use the remaining 1,600 examples from the training set for validation.
# Create a validation set.
raw_val_ds = utils.text_dataset_from_directory(
train_dir,
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
Found 8000 files belonging to 4 classes. Using 1600 files for validation.
test_dir = dataset_dir/'test'
# Create a test set.
raw_test_ds = utils.text_dataset_from_directory(
test_dir,
batch_size=batch_size)
Found 8000 files belonging to 4 classes.
Prepare the dataset for training
Next, you will standardize, tokenize, and vectorize the data using the tf.keras.layers.TextVectorization layer.

- Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset.
- Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace).
- Vectorization refers to converting tokens into numbers so they can be fed into a neural network.

All of these tasks can be accomplished with this layer. (You can learn more about each of them in the tf.keras.layers.TextVectorization API docs.)
Note that:

- The default standardization converts text to lowercase and removes punctuation (standardize='lower_and_strip_punctuation').
- The default tokenizer splits on whitespace (split='whitespace').
- The default vectorization mode is 'int' (output_mode='int'). This outputs integer indices (one per token). This mode can be used to build models that take word order into account. You can also use other modes, like 'binary', to build bag-of-words models.

Each of these defaults can be overridden; a small sketch of a custom standardization follows below.
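For illustration only (a hedged sketch, not part of this tutorial's pipeline): any callable that maps a string tensor to a string tensor can be passed as standardize. The name custom_standardization and the HTML-stripping rule below are assumptions made for the example, not something the Stack Overflow data requires.
import re
import string

# A hypothetical custom standardization: lowercase, drop simple HTML break
# tags, then strip punctuation.
def custom_standardization(input_text):
  lowercase = tf.strings.lower(input_text)
  stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
  return tf.strings.regex_replace(
      stripped_html, '[%s]' % re.escape(string.punctuation), '')

# Pass the callable via `standardize`; tokenization and vectorization keep
# their defaults.
custom_vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=10000,
    output_mode='int')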
You will build two models to learn more about standardization, tokenization, and vectorization with TextVectorization:

- First, you will use the 'binary' vectorization mode to build a bag-of-words model.
- Then, you will use the 'int' mode with a 1D ConvNet.
VOCAB_SIZE = 10000
binary_vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='binary')
For the 'int' mode, in addition to the maximum vocabulary size, you need to set an explicit maximum sequence length (MAX_SEQUENCE_LENGTH), which will cause the layer to pad or truncate sequences to exactly output_sequence_length values:
MAX_SEQUENCE_LENGTH = 250
int_vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
Next, call TextVectorization.adapt to fit the state of the preprocessing layer to the dataset. This will cause the layer to build an index of strings to integers.
# Make a text-only dataset (without labels), then call `TextVectorization.adapt`.
train_text = raw_train_ds.map(lambda text, labels: text)
binary_vectorize_layer.adapt(train_text)
int_vectorize_layer.adapt(train_text)
Print the result of using these layers to preprocess data:
def binary_vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return binary_vectorize_layer(text), label

def int_vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return int_vectorize_layer(text), label
# Retrieve a batch (of 32 reviews and labels) from the dataset.
text_batch, label_batch = next(iter(raw_train_ds))
first_question, first_label = text_batch[0], label_batch[0]
print("Question", first_question)
print("Label", first_label)
Question tf.Tensor(b'"what is the difference between these two ways to create an element? var a = document.createelement(\'div\');..a.id = ""mydiv"";...and..var a = document.createelement(\'div\').id = ""mydiv"";...what is the difference between them such that the first one works and the second one doesn\'t?"\n', shape=(), dtype=string) Label tf.Tensor(2, shape=(), dtype=int32)
print("'binary' vectorized question:",
binary_vectorize_text(first_question, first_label)[0])
'binary' vectorized question: tf.Tensor([[1. 1. 0. ... 0. 0. 0.]], shape=(1, 10000), dtype=float32)
print("'int' vectorized question:",
int_vectorize_text(first_question, first_label)[0])
'int' vectorized question: tf.Tensor( [[ 55 6 2 410 211 229 121 895 4 124 32 245 43 5 1 1 5 1 1 6 2 410 211 191 318 14 2 98 71 188 8 2 199 71 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]], shape=(1, 250), dtype=int64)
As shown above, TextVectorization's 'binary' mode returns an array denoting which tokens exist at least once in the input, while the 'int' mode replaces each token with an integer, thereby preserving their order.
You can look up the token (string) that each integer corresponds to by calling TextVectorization.get_vocabulary on the layer:
print("1289 ---> ", int_vectorize_layer.get_vocabulary()[1289])
print("313 ---> ", int_vectorize_layer.get_vocabulary()[313])
print("Vocabulary size: {}".format(len(int_vectorize_layer.get_vocabulary())))
1289 ---> roman 313 ---> source Vocabulary size: 10000
You are nearly ready to train your model.
As a final preprocessing step, you will apply the TextVectorization layers you created earlier to the training, validation, and test sets:
binary_train_ds = raw_train_ds.map(binary_vectorize_text)
binary_val_ds = raw_val_ds.map(binary_vectorize_text)
binary_test_ds = raw_test_ds.map(binary_vectorize_text)
int_train_ds = raw_train_ds.map(int_vectorize_text)
int_val_ds = raw_val_ds.map(int_vectorize_text)
int_test_ds = raw_test_ds.map(int_vectorize_text)
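Optionally (an assumption on my part, not a step this tutorial relies on), each of these Dataset.map calls can overlap preprocessing across CPU cores by passing num_parallel_calls; a minimal sketch for one of them:
# Same mapping as above, but letting tf.data choose the level of parallelism.
int_train_ds = raw_train_ds.map(
    int_vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)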
Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.

- Dataset.cache keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit in memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files (a small sketch of an on-disk cache follows after the code below).
- Dataset.prefetch overlaps data preprocessing and model execution while training.

You can learn more about both methods, as well as how to cache data to disk, in the Prefetching section of the Better performance with the tf.data API guide.
AUTOTUNE = tf.data.AUTOTUNE
def configure_dataset(dataset):
  return dataset.cache().prefetch(buffer_size=AUTOTUNE)
binary_train_ds = configure_dataset(binary_train_ds)
binary_val_ds = configure_dataset(binary_val_ds)
binary_test_ds = configure_dataset(binary_test_ds)
int_train_ds = configure_dataset(int_train_ds)
int_val_ds = configure_dataset(int_val_ds)
int_test_ds = configure_dataset(int_test_ds)
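As a hedged aside (not needed here, since this dataset fits in memory): Dataset.cache also accepts a file path, in which case the cache is written to disk instead. The path below is hypothetical.
# A minimal sketch of an on-disk cache for a dataset too large for memory.
def configure_large_dataset(dataset, cache_path='/tmp/text_cache'):
  # Passing a filename makes `cache` spill to disk instead of holding
  # everything in memory.
  return dataset.cache(cache_path).prefetch(buffer_size=AUTOTUNE)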
Train the model
It's time to create your neural network.
For the 'binary' vectorized data, define a simple bag-of-words linear model, then configure and train it:
binary_model = tf.keras.Sequential([layers.Dense(4)])
binary_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = binary_model.fit(
binary_train_ds, validation_data=binary_val_ds, epochs=10)
Epoch 1/10 200/200 [==============================] - 2s 4ms/step - loss: 1.1170 - accuracy: 0.6509 - val_loss: 0.9165 - val_accuracy: 0.7844 Epoch 2/10 200/200 [==============================] - 1s 3ms/step - loss: 0.7781 - accuracy: 0.8169 - val_loss: 0.7522 - val_accuracy: 0.8050 Epoch 3/10 200/200 [==============================] - 1s 3ms/step - loss: 0.6274 - accuracy: 0.8591 - val_loss: 0.6664 - val_accuracy: 0.8163 Epoch 4/10 200/200 [==============================] - 1s 3ms/step - loss: 0.5342 - accuracy: 0.8866 - val_loss: 0.6129 - val_accuracy: 0.8188 Epoch 5/10 200/200 [==============================] - 1s 3ms/step - loss: 0.4683 - accuracy: 0.9038 - val_loss: 0.5761 - val_accuracy: 0.8281 Epoch 6/10 200/200 [==============================] - 1s 3ms/step - loss: 0.4181 - accuracy: 0.9181 - val_loss: 0.5494 - val_accuracy: 0.8331 Epoch 7/10 200/200 [==============================] - 1s 3ms/step - loss: 0.3779 - accuracy: 0.9287 - val_loss: 0.5293 - val_accuracy: 0.8388 Epoch 8/10 200/200 [==============================] - 1s 3ms/step - loss: 0.3446 - accuracy: 0.9361 - val_loss: 0.5137 - val_accuracy: 0.8400 Epoch 9/10 200/200 [==============================] - 1s 3ms/step - loss: 0.3164 - accuracy: 0.9430 - val_loss: 0.5014 - val_accuracy: 0.8381 Epoch 10/10 200/200 [==============================] - 1s 3ms/step - loss: 0.2920 - accuracy: 0.9495 - val_loss: 0.4916 - val_accuracy: 0.8388
Next, you will use the 'int' vectorized layer to build a 1D ConvNet:
def create_model(vocab_size, num_labels):
  model = tf.keras.Sequential([
      layers.Embedding(vocab_size, 64, mask_zero=True),
      layers.Conv1D(64, 5, padding="valid", activation="relu", strides=2),
      layers.GlobalMaxPooling1D(),
      layers.Dense(num_labels)
  ])
  return model
# `vocab_size` is `VOCAB_SIZE + 1` since `0` is used additionally for padding.
int_model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=4)
int_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = int_model.fit(int_train_ds, validation_data=int_val_ds, epochs=5)
Epoch 1/5 200/200 [==============================] - 9s 5ms/step - loss: 1.1471 - accuracy: 0.5016 - val_loss: 0.7856 - val_accuracy: 0.6913 Epoch 2/5 200/200 [==============================] - 1s 3ms/step - loss: 0.6378 - accuracy: 0.7550 - val_loss: 0.5494 - val_accuracy: 0.8056 Epoch 3/5 200/200 [==============================] - 1s 3ms/step - loss: 0.3900 - accuracy: 0.8764 - val_loss: 0.4845 - val_accuracy: 0.8206 Epoch 4/5 200/200 [==============================] - 1s 3ms/step - loss: 0.2234 - accuracy: 0.9447 - val_loss: 0.4819 - val_accuracy: 0.8188 Epoch 5/5 200/200 [==============================] - 1s 3ms/step - loss: 0.1146 - accuracy: 0.9809 - val_loss: 0.5038 - val_accuracy: 0.8150
Compare the two models:
print("Linear model on binary vectorized data:")
print(binary_model.summary())
Linear model on binary vectorized data: Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 4) 40004 ================================================================= Total params: 40,004 Trainable params: 40,004 Non-trainable params: 0 _________________________________________________________________ None
print("ConvNet model on int vectorized data:")
print(int_model.summary())
ConvNet model on int vectorized data: Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, None, 64) 640064 conv1d (Conv1D) (None, None, 64) 20544 global_max_pooling1d (Globa (None, 64) 0 lMaxPooling1D) dense_1 (Dense) (None, 4) 260 ================================================================= Total params: 660,868 Trainable params: 660,868 Non-trainable params: 0 _________________________________________________________________ None
Evaluate both models on the test data:
binary_loss, binary_accuracy = binary_model.evaluate(binary_test_ds)
int_loss, int_accuracy = int_model.evaluate(int_test_ds)
print("Binary model accuracy: {:2.2%}".format(binary_accuracy))
print("Int model accuracy: {:2.2%}".format(int_accuracy))
250/250 [==============================] - 1s 3ms/step - loss: 0.5178 - accuracy: 0.8151 250/250 [==============================] - 1s 2ms/step - loss: 0.5262 - accuracy: 0.8073 Binary model accuracy: 81.51% Int model accuracy: 80.73%
Export the model
In the code above, you applied tf.keras.layers.TextVectorization to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model.
To do so, you can create a new model using the weights you have just trained:
export_model = tf.keras.Sequential(
[binary_vectorize_layer, binary_model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print("Accuracy: {:2.2%}".format(binary_accuracy))
250/250 [==============================] - 1s 4ms/step - loss: 0.5178 - accuracy: 0.8151 Accuracy: 81.51%
Now your model can take raw strings as input and predict a score for each label using Model.predict. Define a function to find the label with the maximum score:
def get_string_labels(predicted_scores_batch):
  predicted_int_labels = tf.argmax(predicted_scores_batch, axis=1)
  predicted_labels = tf.gather(raw_train_ds.class_names, predicted_int_labels)
  return predicted_labels
Run inference on new data
inputs = [
"how do I extract keys from a dict into a list?", # 'python'
"debug public static void main(string[] args) {...}", # 'java'
]
predicted_scores = export_model.predict(inputs)
predicted_labels = get_string_labels(predicted_scores)
for input, label in zip(inputs, predicted_labels):
  print("Question: ", input)
  print("Predicted label: ", label.numpy())
Question: how do I extract keys from a dict into a list? Predicted label: b'python' Question: debug public static void main(string[] args) {...} Predicted label: b'java'
Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment and reduces the potential for training/serving skew.
There is a performance difference to keep in mind when choosing where to apply tf.keras.layers.TextVectorization. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while you develop your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.
Visit the Save and load models tutorial to learn more about saving models.
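As a brief, hedged sketch (saving is not covered in this tutorial, and the directory name is an assumption), the export model, with its TextVectorization layer included, can be saved in the SavedModel format and reloaded for serving:
# Save the end-to-end model (raw strings in, scores out); the path is hypothetical.
export_model.save('stack_overflow_classifier')

# Reload it and run a quick prediction to confirm it still accepts raw strings.
reloaded_model = tf.keras.models.load_model('stack_overflow_classifier')
reloaded_scores = reloaded_model.predict(["how do I sort a dict by value?"])
print(get_string_labels(reloaded_scores).numpy())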
Example 2: Predict the author of Iliad translations
The following provides an example of using tf.data.TextLineDataset to load examples from text files, and TensorFlow Text to preprocess the data. You will use three different English translations of the same work, Homer's Iliad, and train a model to identify the translator given a single line of text.
Download and explore the dataset
The texts of the three translations are by William Cowper, Edward, Earl of Derby, and Samuel Butler.
The text files used in this tutorial have undergone some typical preprocessing tasks like removing document headers and footers, line numbers, and chapter titles.
Download these lightly munged files locally:
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']
for name in FILE_NAMES:
  text_dir = utils.get_file(name, origin=DIRECTORY_URL + name)
parent_dir = pathlib.Path(text_dir).parent
list(parent_dir.iterdir())
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt 819200/815980 [==============================] - 0s 0us/step 827392/815980 [==============================] - 0s 0us/step Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/derby.txt 811008/809730 [==============================] - 0s 0us/step 819200/809730 [==============================] - 0s 0us/step Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/butler.txt 811008/807992 [==============================] - 0s 0us/step 819200/807992 [==============================] - 0s 0us/step [PosixPath('/home/kbuilder/.keras/datasets/derby.txt'), PosixPath('/home/kbuilder/.keras/datasets/butler.txt'), PosixPath('/home/kbuilder/.keras/datasets/cowper.txt'), PosixPath('/home/kbuilder/.keras/datasets/fashion-mnist'), PosixPath('/home/kbuilder/.keras/datasets/mnist.npz')]
Load the dataset
Previously, with tf.keras.utils.text_dataset_from_directory all contents of a file were treated as a single example. Here, you will use tf.data.TextLineDataset, which is designed to create a tf.data.Dataset from a text file where each example is a line of text from the original file. TextLineDataset is useful for text data that is primarily line-based (for example, poetry or error logs).
Iterate through these files, loading each one into its own dataset. Each example needs to be individually labeled, so use Dataset.map to apply a labeler function to each one. This will iterate over every example in the dataset, returning (example, label) pairs.
def labeler(example, index):
  return example, tf.cast(index, tf.int64)
labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
  lines_dataset = tf.data.TextLineDataset(str(parent_dir/file_name))
  labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
  labeled_data_sets.append(labeled_dataset)
Next, you will combine these labeled datasets into a single dataset using Dataset.concatenate, and shuffle it with Dataset.shuffle:
BUFFER_SIZE = 50000
BATCH_SIZE = 64
VALIDATION_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
  all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
all_labeled_data = all_labeled_data.shuffle(
BUFFER_SIZE, reshuffle_each_iteration=False)
Print out a few examples as before. The dataset hasn't been batched yet, hence each entry in all_labeled_data corresponds to one data point:
for text, label in all_labeled_data.take(10):
  print("Sentence: ", text.numpy())
  print("Label:", label.numpy())
Sentence: b'Beneath the yoke the flying coursers led.' Label: 1 Sentence: b'Too free a range, and watchest all I do;' Label: 1 Sentence: b'defence of their ships. Thus would any seer who was expert in these' Label: 2 Sentence: b'"From morn to eve I fell, a summer\'s day,"' Label: 0 Sentence: b'went to the city bearing a message of peace to the Cadmeians; on his' Label: 2 Sentence: b'darkness of the flying night, and tell it to Agamemnon. This might' Label: 2 Sentence: b"To that distinction, Nestor's son, whom yet" Label: 0 Sentence: b'A sounder judge of honour and disgrace:' Label: 1 Sentence: b'He wept as he spoke, and the elders sighed in concert as each thought' Label: 2 Sentence: b'to gather his bones for the silt in which I shall have hidden him, and' Label: 2
Prepare the dataset for training
Instead of using tf.keras.layers.TextVectorization to preprocess the text dataset, you will now use the TensorFlow Text APIs to standardize and tokenize the data, build a vocabulary, and use tf.lookup.StaticVocabularyTable to map tokens to integers to feed to the model. (Learn more about TensorFlow Text.)
Define a function to convert the text to lower-case and tokenize it:

- TensorFlow Text provides various tokenizers. In this example, you will use text.UnicodeScriptTokenizer to tokenize the dataset.
- You will use Dataset.map to apply the tokenization to the dataset.
tokenizer = tf_text.UnicodeScriptTokenizer()
def tokenize(text, unused_label):
  lower_case = tf_text.case_fold_utf8(text)
  return tokenizer.tokenize(lower_case)
tokenized_ds = all_labeled_data.map(tokenize)
You can iterate over the dataset and print out a few tokenized examples:
for text_batch in tokenized_ds.take(5):
  print("Tokens: ", text_batch.numpy())
Tokens: [b'beneath' b'the' b'yoke' b'the' b'flying' b'coursers' b'led' b'.'] Tokens: [b'too' b'free' b'a' b'range' b',' b'and' b'watchest' b'all' b'i' b'do' b';'] Tokens: [b'defence' b'of' b'their' b'ships' b'.' b'thus' b'would' b'any' b'seer' b'who' b'was' b'expert' b'in' b'these'] Tokens: [b'"' b'from' b'morn' b'to' b'eve' b'i' b'fell' b',' b'a' b'summer' b"'" b's' b'day' b',"'] Tokens: [b'went' b'to' b'the' b'city' b'bearing' b'a' b'message' b'of' b'peace' b'to' b'the' b'cadmeians' b';' b'on' b'his']
Next, you will build a vocabulary by sorting tokens by frequency and keeping the top VOCAB_SIZE tokens:
tokenized_ds = configure_dataset(tokenized_ds)
vocab_dict = collections.defaultdict(lambda: 0)
for toks in tokenized_ds.as_numpy_iterator():
  for tok in toks:
    vocab_dict[tok] += 1
vocab = sorted(vocab_dict.items(), key=lambda x: x[1], reverse=True)
vocab = [token for token, count in vocab]
vocab = vocab[:VOCAB_SIZE]
vocab_size = len(vocab)
print("Vocab size: ", vocab_size)
print("First five vocab entries:", vocab[:5])
Vocab size: 10000 First five vocab entries: [b',', b'the', b'and', b"'", b'of']
To convert the tokens into integers, use the vocab set to create a tf.lookup.StaticVocabularyTable. You will map tokens to integers in the range [2, vocab_size + 2]. As with the TextVectorization layer, 0 is reserved to denote padding and 1 is reserved to denote an out-of-vocabulary (OOV) token.
keys = vocab
values = range(2, len(vocab) + 2) # Reserve `0` for padding, `1` for OOV tokens.
init = tf.lookup.KeyValueTensorInitializer(
keys, values, key_dtype=tf.string, value_dtype=tf.int64)
num_oov_buckets = 1
vocab_table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets)
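As a quick optional check (not part of the tutorial), you can look up a few tokens directly: in-vocabulary tokens map to their assigned integer ids, while strings the vocabulary has never seen are hashed into the OOV bucket. The example tokens below are arbitrary.
# Look up a couple of tokens to see the mapping in action.
example_tokens = tf.constant([b'the', b'qwertyuiop'])
print(vocab_table.lookup(example_tokens))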
Finally, define a function to standardize, tokenize, and vectorize the dataset using the tokenizer and lookup table:
def preprocess_text(text, label):
  standardized = tf_text.case_fold_utf8(text)
  tokenized = tokenizer.tokenize(standardized)
  vectorized = vocab_table.lookup(tokenized)
  return vectorized, label
You can try this on a single example and print the output:
example_text, example_label = next(iter(all_labeled_data))
print("Sentence: ", example_text.numpy())
vectorized_text, example_label = preprocess_text(example_text, example_label)
print("Vectorized sentence: ", vectorized_text.numpy())
Sentence: b'Beneath the yoke the flying coursers led.' Vectorized sentence: [234 3 811 3 446 749 248 7]
Now run the preprocess function on the dataset using Dataset.map:
all_encoded_data = all_labeled_data.map(preprocess_text)
Split the dataset into training and test sets
The Keras TextVectorization layer also batches and pads the vectorized data. Padding is required because the examples inside of a batch need to be the same size and shape, but the examples in these datasets are not all the same size: each line of text has a different number of words.
tf.data.Dataset supports splitting and padded-batching of datasets:
train_data = all_encoded_data.skip(VALIDATION_SIZE).shuffle(BUFFER_SIZE)
validation_data = all_encoded_data.take(VALIDATION_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)
validation_data = validation_data.padded_batch(BATCH_SIZE)
Now, validation_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays.
To illustrate this:
sample_text, sample_labels = next(iter(validation_data))
print("Text batch shape: ", sample_text.shape)
print("Label batch shape: ", sample_labels.shape)
print("First text example: ", sample_text[0])
print("First label example: ", sample_labels[0])
Text batch shape: (64, 18) Label batch shape: (64,) First text example: tf.Tensor([234 3 811 3 446 749 248 7 0 0 0 0 0 0 0 0 0 0], shape=(18,), dtype=int64) First label example: tf.Tensor(1, shape=(), dtype=int64)
Since you use 0 for padding and 1 for out-of-vocabulary (OOV) tokens, the vocabulary size has increased by two:
vocab_size += 2
Configure the datasets for better performance as before:
train_data = configure_dataset(train_data)
validation_data = configure_dataset(validation_data)
Train the model
You can train a model on this dataset as before:
model = create_model(vocab_size=vocab_size, num_labels=3)
model.compile(
optimizer='adam',
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_data, validation_data=validation_data, epochs=3)
Epoch 1/3 697/697 [==============================] - 27s 9ms/step - loss: 0.5238 - accuracy: 0.7658 - val_loss: 0.3814 - val_accuracy: 0.8306 Epoch 2/3 697/697 [==============================] - 3s 4ms/step - loss: 0.2852 - accuracy: 0.8847 - val_loss: 0.3697 - val_accuracy: 0.8428 Epoch 3/3 697/697 [==============================] - 3s 4ms/step - loss: 0.1924 - accuracy: 0.9279 - val_loss: 0.3917 - val_accuracy: 0.8424
loss, accuracy = model.evaluate(validation_data)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
79/79 [==============================] - 1s 2ms/step - loss: 0.3917 - accuracy: 0.8424 Loss: 0.391705721616745 Accuracy: 84.24%
Export the model
To make the model capable of taking raw strings as input, you will create a Keras TextVectorization layer that performs the same steps as your custom preprocessing function. Since you have already trained a vocabulary, you can use TextVectorization.set_vocabulary (instead of TextVectorization.adapt, which would train a new vocabulary).
preprocess_layer = TextVectorization(
max_tokens=vocab_size,
standardize=tf_text.case_fold_utf8,
split=tokenizer.tokenize,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
preprocess_layer.set_vocabulary(vocab)
export_model = tf.keras.Sequential(
[preprocess_layer, model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# Create a test dataset of raw strings.
test_ds = all_labeled_data.take(VALIDATION_SIZE).batch(BATCH_SIZE)
test_ds = configure_dataset(test_ds)
loss, accuracy = export_model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
2022-02-05 02:26:40.203675: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:907] Skipping loop optimization for Merge node with control input: sequential_4/text_vectorization_2/UnicodeScriptTokenize/Assert_1/AssertGuard/branch_executed/_185 79/79 [==============================] - 6s 8ms/step - loss: 0.4955 - accuracy: 0.7964 Loss: 0.4955357015132904 Accuracy: 79.64%
The loss and accuracy for the model on the encoded validation set and the exported model on the raw validation set are the same, as expected.
Run inference on new data
inputs = [
"Join'd to th' Ionians with their flowing robes,", # Label: 1
"the allies, and his armour flashed about him so that he seemed to all", # Label: 2
"And with loud clangor of his arms he fell.", # Label: 0
]
predicted_scores = export_model.predict(inputs)
predicted_labels = tf.argmax(predicted_scores, axis=1)
for input, label in zip(inputs, predicted_labels):
  print("Question: ", input)
  print("Predicted label: ", label.numpy())
2022-02-05 02:26:43.328949: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:907] Skipping loop optimization for Merge node with control input: sequential_4/text_vectorization_2/UnicodeScriptTokenize/Assert_1/AssertGuard/branch_executed/_185 Question: Join'd to th' Ionians with their flowing robes, Predicted label: 1 Question: the allies, and his armour flashed about him so that he seemed to all Predicted label: 2 Question: And with loud clangor of his arms he fell. Predicted label: 0
Download more datasets using TensorFlow Datasets (TFDS)
You can download many more datasets from TensorFlow Datasets.
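If you first want to browse what is available, here is a quick optional sketch (filtering on 'imdb' is just an example):
# List all registered TFDS datasets, then the ones whose name mentions 'imdb'.
print("Number of available datasets:", len(tfds.list_builders()))
print([name for name in tfds.list_builders() if 'imdb' in name])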
In this example, you will use the IMDB Large Movie Review dataset to train a model for sentiment classification:
# Training set.
train_ds = tfds.load(
'imdb_reviews',
split='train[:80%]',
batch_size=BATCH_SIZE,
shuffle_files=True,
as_supervised=True)
# Validation set.
val_ds = tfds.load(
'imdb_reviews',
split='train[80%:]',
batch_size=BATCH_SIZE,
shuffle_files=True,
as_supervised=True)
Print a few examples:
for review_batch, label_batch in val_ds.take(1):
  for i in range(5):
    print("Review: ", review_batch[i].numpy())
    print("Label: ", label_batch[i].numpy())
Review: b"Instead, go to the zoo, buy some peanuts and feed 'em to the monkeys. Monkeys are funny. People with amnesia who don't say much, just sit there with vacant eyes are not all that funny.<br /><br />Black comedy? There isn't a black person in it, and there isn't one funny thing in it either.<br /><br />Walmart buys these things up somehow and puts them on their dollar rack. It's labeled Unrated. I think they took out the topless scene. They may have taken out other stuff too, who knows? All we know is that whatever they took out, isn't there any more.<br /><br />The acting seemed OK to me. There's a lot of unfathomables tho. It's supposed to be a city? It's supposed to be a big lake? If it's so hot in the church people are fanning themselves, why are they all wearing coats?" Label: 0 Review: b'Well, was Morgan Freeman any more unusual as God than George Burns? This film sure was better than that bore, "Oh, God". I was totally engrossed and LMAO all the way through. Carrey was perfect as the out of sorts anchorman wannabe, and Aniston carried off her part as the frustrated girlfriend in her usual well played performance. I, for one, don\'t consider her to be either ugly or untalented. I think my favorite scene was when Carrey opened up the file cabinet thinking it could never hold his life history. See if you can spot the file in the cabinet that holds the events of his bathroom humor: I was rolling over this one. Well written and even better played out, this comedy will go down as one of this funnyman\'s best.' Label: 1 Review: b'I remember stumbling upon this special while channel-surfing in 1965. I had never heard of Barbra before. When the show was over, I thought "This is probably the best thing on TV I will ever see in my life." 42 years later, that has held true. There is still nothing so amazing, so honestly astonishing as the talent that was displayed here. You can talk about all the super-stars you want to, this is the most superlative of them all!<br /><br />You name it, she can do it. Comedy, pathos, sultry seduction, ballads, Barbra is truly a story-teller. Her ability to pull off anything she attempts is legendary. But this special was made in the beginning, and helped to create the legend that she quickly became. In spite of rising so far in such a short time, she has fulfilled the promise, revealing more of her talents as she went along. But they are all here from the very beginning. You will not be disappointed in viewing this.' Label: 1 Review: b"Firstly, I would like to point out that people who have criticised this film have made some glaring errors. Anything that has a rating below 6/10 is clearly utter nonsense.<br /><br />Creep is an absolutely fantastic film with amazing film effects. The actors are highly believable, the narrative thought provoking and the horror and graphical content extremely disturbing. <br /><br />There is much mystique in this film. Many questions arise as the audience are revealed to the strange and freakish creature that makes habitat in the dark rat ridden tunnels. How was 'Craig' created and what happened to him?<br /><br />A fantastic film with a large chill factor. A film with so many unanswered questions and a film that needs to be appreciated along with others like 28 Days Later, The Bunker, Dog Soldiers and Deathwatch.<br /><br />Look forward to more of these fantastic films!!" Label: 1 Review: b"I'm sorry but I didn't like this doc very much. I can think of a million ways it could have been better. 
The people who made it obviously don't have much imagination. The interviews aren't very interesting and no real insight is offered. The footage isn't assembled in a very informative way, either. It's too bad because this is a movie that really deserves spellbinding special features. One thing I'll say is that Isabella Rosselini gets more beautiful the older she gets. All considered, this only gets a '4.'" Label: 0
Now you can preprocess the data and train a model as before.
Prepare the dataset for training
vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
# Make a text-only dataset (without labels), then call `TextVectorization.adapt`.
train_text = train_ds.map(lambda text, labels: text)
vectorize_layer.adapt(train_text)
def vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return vectorize_layer(text), label
train_ds = train_ds.map(vectorize_text)
val_ds = val_ds.map(vectorize_text)
# Configure datasets for performance as before.
train_ds = configure_dataset(train_ds)
val_ds = configure_dataset(val_ds)
Create, configure, and train the model.
model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=1)
model.summary()
Model: "sequential_5" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_2 (Embedding) (None, None, 64) 640064 conv1d_2 (Conv1D) (None, None, 64) 20544 global_max_pooling1d_2 (Glo (None, 64) 0 balMaxPooling1D) dense_3 (Dense) (None, 1) 65 ================================================================= Total params: 660,673 Trainable params: 660,673 Non-trainable params: 0 _________________________________________________________________
model.compile(
loss=losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = model.fit(train_ds, validation_data=val_ds, epochs=3)
Epoch 1/3 313/313 [==============================] - 3s 7ms/step - loss: 0.5417 - accuracy: 0.6618 - val_loss: 0.3752 - val_accuracy: 0.8244 Epoch 2/3 313/313 [==============================] - 1s 4ms/step - loss: 0.2996 - accuracy: 0.8680 - val_loss: 0.3165 - val_accuracy: 0.8632 Epoch 3/3 313/313 [==============================] - 1s 4ms/step - loss: 0.1845 - accuracy: 0.9276 - val_loss: 0.3217 - val_accuracy: 0.8674
loss, accuracy = model.evaluate(val_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
79/79 [==============================] - 0s 2ms/step - loss: 0.3217 - accuracy: 0.8674 Loss: 0.32172858715057373 Accuracy: 86.74%
Export the model
export_model = tf.keras.Sequential(
[vectorize_layer, model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# 0 --> negative review
# 1 --> positive review
inputs = [
"This is a fantastic movie.",
"This is a bad movie.",
"This movie was so bad that it was good.",
"I will never say yes to watching this movie.",
]
predicted_scores = export_model.predict(inputs)
predicted_labels = [int(round(x[0])) for x in predicted_scores]
for input, label in zip(inputs, predicted_labels):
  print("Question: ", input)
  print("Predicted label: ", label)
Question: This is a fantastic movie. Predicted label: 1 Question: This is a bad movie. Predicted label: 0 Question: This movie was so bad that it was good. Predicted label: 0 Question: I will never say yes to watching this movie. Predicted label: 0
Conclusion
This tutorial demonstrated several ways to load and preprocess text. As a next step, you can explore additional text preprocessing tutorials from TensorFlow Text.
You can also find new datasets on TensorFlow Datasets. And, to learn more about tf.data, check out the tf.data: Build TensorFlow input pipelines guide.