Convert a TensorFlow model into output_format.
tf.compat.v1.lite.TFLiteConverter(
graph_def,
input_tensors,
output_tensors,
input_arrays_with_shape=None,
output_arrays=None,
experimental_debug_info_func=None
)
This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras
model into either a TFLite FlatBuffer or graph visualization.
Example usage:

# Converting a GraphDef from session.
converter = tf.compat.v1.lite.TFLiteConverter.from_session(
    sess, in_tensors, out_tensors)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

# Converting a GraphDef from file.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

# Converting a SavedModel.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
    saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

# Converting a tf.keras model file.
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(
    keras_model_file)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Args:
  graph_def: Frozen TensorFlow GraphDef.
  input_tensors: List of input tensors. Type and shape are computed using
    foo.shape and foo.dtype.
  output_tensors: List of output tensors (only .name is used from this).
  input_arrays_with_shape: Tuple of strings representing input tensor names
    and list of integers representing input shapes
    (e.g., [("foo", [1, 16, 16, 3])]). Use only when graph cannot be loaded
    into TensorFlow and when input_tensors and output_tensors are None.
    (default None)
  output_arrays: List of output tensors to freeze graph with. Use only when
    graph cannot be loaded into TensorFlow and when input_tensors and
    output_tensors are None. (default None)
  experimental_debug_info_func: An experimental function to retrieve the
    graph debug info for a set of nodes from the graph_def.
Raises:
  ValueError: Invalid arguments.
Attributes:
  optimizations: Experimental flag, subject to change. Set of optimizations to
    apply, e.g., {tf.lite.Optimize.DEFAULT}. (default None, must be None or a
    set of values of type tf.lite.Optimize)
  representative_dataset: A generator function used for integer quantization
    where each generated sample has the same order, type and shape as the
    inputs to the model. Usually, this is a small subset of a few hundred
    samples randomly chosen, in no particular order, from the training or
    evaluation dataset. This is an optional attribute, but required for full
    integer quantization, i.e., if tf.int8 is the only supported type in
    target_spec.supported_types. Refer to tf.lite.RepresentativeDataset.
    (default None)
  target_spec: Experimental flag, subject to change. Specifications of target
    device, including supported ops set, supported types and a set of
    user-defined TensorFlow operators required in the TensorFlow Lite runtime.
    Refer to tf.lite.TargetSpec.
  inference_type: Data type of numeric arrays, excluding the input layer.
    (default tf.float32, must be in {tf.float32, tf.int8, tf.uint8})
  inference_input_type: Data type of the numeric arrays in the input layer. If
    inference_input_type is in {tf.int8, tf.uint8}, then quantized_input_stats
    must be provided. (default is the value assigned to inference_type, must
    be in {tf.float32, tf.int8, tf.uint8})
  inference_output_type: Data type of the numeric arrays in the output layer.
    (default is the value assigned to inference_type, must be in
    {tf.float32, tf.int8, tf.uint8})
  quantized_input_stats: Map of input tensor names to a tuple of floats
    representing the mean and standard deviation of the training data
    (e.g., {"foo": (0., 1.)}). Required if inference_input_type is tf.int8
    or tf.uint8. (default None)
  default_ranges_stats: Tuple of integers (min, max) representing range values
    for all numeric arrays without a specified range. Intended for
    experimenting with quantization via "dummy quantization". (default None)
  allow_custom_ops: Boolean indicating whether to allow custom operations.
    When False, any unknown operation is an error. When True, custom ops are
    created for any op that is unknown. The developer will need to provide
    these to the TensorFlow Lite runtime with a custom resolver. (default
    False)
  drop_control_dependency: Boolean indicating whether to drop control
    dependencies silently. This is due to TFLite not supporting control
    dependencies. (default True)
  reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
    nodes in unexpected locations. Used when the location of the FakeQuant
    nodes is preventing graph transformations necessary to convert the graph.
    Results in a graph that differs from the quantized training graph,
    potentially causing differing arithmetic behavior. (default False)
  change_concat_input_ranges: Boolean to change behavior of min/max ranges for
    inputs and outputs of the concat operator for quantized models. When True,
    the ranges of the concat operator's inputs and outputs are changed so that
    they overlap. (default False)
  output_format: Output file format. (default
    tf.compat.v1.lite.constants.TFLITE, must be in
    {tf.compat.v1.lite.constants.TFLITE,
    tf.compat.v1.lite.constants.GRAPHVIZ_DOT})
  dump_graphviz_dir: Full filepath of folder to dump the graphs at various
    stages of processing GraphViz .dot files. Preferred over
    output_format=tf.compat.v1.lite.constants.GRAPHVIZ_DOT in order to keep
    the requirements of the output file. (default None)
  dump_graphviz_video: Boolean indicating whether to dump the GraphViz .dot
    files after every graph transformation. Requires the dump_graphviz_dir
    flag to be specified. (default False)
  conversion_summary_dir: Full path of the directory to store conversion logs.
    (default None)
  exclude_conversion_metadata: Boolean indicating whether to omit the
    conversion metadata from the converted model. (default False)
  target_ops: Deprecated. Please use target_spec.supported_ops instead.
  post_training_quantize: Deprecated. Please use optimizations instead and set
    it to {tf.lite.Optimize.DEFAULT}. (default False)
  experimental_new_converter: Experimental flag, subject to change. Enables
    MLIR-based conversion. (default True)
  experimental_new_quantizer: Experimental flag, subject to change. Enables
    MLIR-based quantization conversion instead of Flatbuffer-based conversion.
    (default True)
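As a minimal sketch of how several of these attributes combine, the snippet below configures full-integer post-training quantization before calling convert(). The SavedModel path, the input shape, and the random calibration data are placeholders for illustration, not part of this API.

import numpy as np
import tensorflow as tf

# Hypothetical SavedModel path, for illustration only.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")

# Enable the default optimization set, which includes quantization.
converter.optimizations = {tf.lite.Optimize.DEFAULT}

def representative_dataset():
  # Yield a few hundred samples with the same order, type and shape as the
  # model inputs; random data stands in for real calibration samples here.
  for _ in range(100):
    yield [np.random.rand(1, 16, 16, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset

# Restrict conversion to built-in int8 ops for full-integer quantization.
converter.target_spec.supported_ops = {tf.lite.OpsSet.TFLITE_BUILTINS_INT8}

tflite_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_model)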
Methods
convert
convert()
Converts a TensorFlow GraphDef based on instance variables.
Returns:
  The converted data in serialized format. Either a TFLite Flatbuffer or a
  Graphviz graph depending on value in output_format.
Raises:
  ValueError: Input shape is not specified.
    None value for dimension in input_tensor.
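As a hedged sketch of convert() together with the quantization attributes documented above, the following assumes a frozen graph produced with quantization-aware training; the file path, tensor names and input statistics are illustrative placeholders.

import tensorflow as tf

# Placeholder path and tensor names for a quantization-aware-trained graph.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "/tmp/quant_aware_graph.pb", ["input"], ["output"])
converter.inference_type = tf.uint8
# Mean and standard deviation of the training data per input tensor
# (values here are assumed, not derived from a real dataset).
converter.quantized_input_stats = {"input": (127.5, 127.5)}
tflite_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_model)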
from_frozen_graph
@classmethod
from_frozen_graph(
graph_def_file, input_arrays, output_arrays, input_shapes=None
)
Creates a TFLiteConverter class from a file containing a frozen GraphDef.
Args:
  graph_def_file: Full filepath of file containing frozen GraphDef.
  input_arrays: List of input tensors to freeze graph with.
  output_arrays: List of output tensors to freeze graph with.
  input_shapes: Dict of strings representing input tensor names to list of
    integers representing input shapes (e.g., {"foo": [1, 16, 16, 3]}).
    Automatically determined when input shapes is None (e.g., {"foo": None}).
    (default None)
Returns:
  TFLiteConverter class.
Raises:
  IOError: File not found.
    Unable to parse input file.
  ValueError: The graph is not frozen.
    input_arrays or output_arrays contains an invalid tensor name.
    input_shapes is not correctly defined when required.
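A short usage sketch, assuming a frozen MobileNet-style graph on disk; the file path and tensor names are hypothetical, and input_shapes is only needed when the shapes cannot be inferred from the graph.

import tensorflow as tf

# Hypothetical file path and tensor names, for illustration only.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="/tmp/frozen_graph.pb",
    input_arrays=["input"],
    output_arrays=["MobilenetV1/Predictions/Softmax"],
    input_shapes={"input": [1, 224, 224, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)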
from_keras_model_file
@classmethod
from_keras_model_file(
model_file,
input_arrays=None,
input_shapes=None,
output_arrays=None,
custom_objects=None
)
Creates a TFLiteConverter class from a tf.keras model file.
Args:
  model_file: Full filepath of HDF5 file containing the tf.keras model.
  input_arrays: List of input tensors to freeze graph with. Uses input
    arrays from SignatureDef when none are provided. (default None)
  input_shapes: Dict of strings representing input tensor names to list of
    integers representing input shapes (e.g., {"foo": [1, 16, 16, 3]}).
    Automatically determined when input shapes is None (e.g., {"foo": None}).
    (default None)
  output_arrays: List of output tensors to freeze graph with. Uses output
    arrays from SignatureDef when none are provided. (default None)
  custom_objects: Dict mapping names (strings) to custom classes or
    functions to be considered during model deserialization. (default None)
Returns:
  TFLiteConverter class.
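A self-contained sketch: it saves a tiny tf.keras model to an HDF5 file (the path is chosen for illustration) and then converts it.

import tensorflow as tf

# Build and save a small tf.keras model as HDF5; the path is illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.save("/tmp/tiny_model.h5")

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("/tmp/tiny_model.h5")
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)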
from_saved_model
@classmethod
from_saved_model(
saved_model_dir,
input_arrays=None,
input_shapes=None,
output_arrays=None,
tag_set=None,
signature_key=None
)
Creates a TFLiteConverter class from a SavedModel.
Args:
  saved_model_dir: SavedModel directory to convert.
  input_arrays: List of input tensors to freeze graph with. Uses input
    arrays from SignatureDef when none are provided. (default None)
  input_shapes: Dict of strings representing input tensor names to list of
    integers representing input shapes (e.g., {"foo": [1, 16, 16, 3]}).
    Automatically determined when input shapes is None (e.g., {"foo": None}).
    (default None)
  output_arrays: List of output tensors to freeze graph with. Uses output
    arrays from SignatureDef when none are provided. (default None)
  tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
    analyze. All tags in the tag set must be present. (default
    {tf.saved_model.SERVING})
  signature_key: Key identifying SignatureDef containing inputs and outputs.
    (default tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY)
Returns:
  TFLiteConverter class.
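A usage sketch with the defaults spelled out; the SavedModel directory is a placeholder, and tag_set and signature_key are shown at their documented default values.

import tensorflow as tf

# "/tmp/saved_model" is a placeholder directory for illustration.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
    "/tmp/saved_model",
    tag_set={tf.saved_model.SERVING},
    signature_key=tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)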
from_session
@classmethod
from_session(
sess, input_tensors, output_tensors
)
Creates a TFLiteConverter class from a TensorFlow Session.
Args:
  sess: TensorFlow Session.
  input_tensors: List of input tensors. Type and shape are computed using
    foo.shape and foo.dtype.
  output_tensors: List of output tensors (only .name is used from this).
Returns:
  TFLiteConverter class.
get_input_arrays
get_input_arrays()
Returns a list of the names of the input tensors.
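A small end-to-end sketch (a toy graph with assumed tensor names) showing from_session together with get_input_arrays:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy graph for illustration: one placeholder feeding a matmul.
with tf.compat.v1.Session() as sess:
  inp = tf.compat.v1.placeholder(tf.float32, shape=[1, 4], name="input")
  out = tf.matmul(inp, tf.ones([4, 2]), name="output")

  converter = tf.compat.v1.lite.TFLiteConverter.from_session(sess, [inp], [out])
  # get_input_arrays() returns the input tensor names, e.g. ["input"], which
  # can be used as keys for per-input settings such as quantized_input_stats.
  print(converter.get_input_arrays())

  tflite_model = converter.convert()
  open("converted_model.tflite", "wb").write(tflite_model)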