Debugger for Quantized TensorFlow Lite debug mode models.
tf.lite.experimental.QuantizationDebugger(
    quant_debug_model_path: Optional[str] = None,
    quant_debug_model_content: Optional[bytes] = None,
    float_model_path: Optional[str] = None,
    float_model_content: Optional[bytes] = None,
    debug_dataset: Optional[Callable[[], Iterable[Sequence[np.ndarray]]]] = None,
    debug_options: Optional[tf.lite.experimental.QuantizationDebugOptions] = None,
    converter: Optional[TFLiteConverter] = None
) -> None
The debugger runs TensorFlow Lite converted models that are equipped with debug ops and collects debug information. It calculates statistics using default post-processing functions as well as user-defined ones.
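A minimal sketch of the typical workflow, assuming TensorFlow 2.x is installed. The `toy_model`, `weights`, and `debug_dataset` names are illustrative, not part of the API; passing a `converter` lets the debugger build the instrumented quantized model itself:

```python
import numpy as np
import tensorflow as tf

# A toy float model (hypothetical): a single matmul, which quantizes
# to a fully-connected op.
weights = tf.constant(np.random.random((8, 4)).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32)])
def toy_model(x):
    return tf.matmul(x, weights)

# The debug dataset is a callable returning an iterable of input lists,
# matching the debug_dataset parameter's signature.
def debug_dataset():
    for _ in range(4):
        yield [np.random.random((1, 8)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = debug_dataset

# With a converter supplied, the debugger calibrates and produces both
# the instrumented and non-instrumented quantized models on demand.
debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter, debug_dataset=debug_dataset)
debugger.run()  # Collect per-layer quantization error statistics.
```

After `run()`, per-layer results are available via `debugger.layer_statistics`.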
Raises
    ValueError: If the debugger was unable to be created.
Attributes
    options
Methods
get_debug_quantized_model
get_debug_quantized_model() -> bytes
Returns an instrumented quantized model.
Converts the quantized model with the initialized converter and returns the model bytes. The model is instrumented with numeric verification operations and should only be used for debugging.
Returns
    Model bytes corresponding to the model.
Raises
    ValueError: If a converter is not passed to the debugger.
get_nondebug_quantized_model
get_nondebug_quantized_model() -> bytes
Returns a non-instrumented quantized model.
Converts the quantized model with the initialized converter and returns the model bytes. The model is not instrumented with numeric verification operations.
Returns
    Model bytes corresponding to the model.
Raises
    ValueError: If a converter is not passed to the debugger.
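A hedged sketch contrasting the two getters, assuming a debugger constructed with a `converter` (required for both calls; the `toy_model` and `debug_dataset` names are illustrative):

```python
import numpy as np
import tensorflow as tf

# Toy setup (hypothetical): a single-matmul model and a small dataset.
weights = tf.constant(np.random.random((4, 2)).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def toy_model(x):
    return tf.matmul(x, weights)

def debug_dataset():
    for _ in range(3):
        yield [np.random.random((1, 4)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = debug_dataset

debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter, debug_dataset=debug_dataset)

# Instrumented model: contains numeric verification ops, debugging only.
debug_model = debugger.get_debug_quantized_model()

# Non-instrumented model: same quantization, no verification ops.
release_model = debugger.get_nondebug_quantized_model()
```

Both calls return serialized TFLite flatbuffer bytes; only the non-instrumented model is meant to leave the debugging workflow.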
layer_statistics_dump
layer_statistics_dump(
    file: IO[str]
) -> None
Dumps layer statistics into the given file, in CSV format.
Args
    file: File, or file-like object, to write to.
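Since any file-like object is accepted, statistics can be captured in memory with `io.StringIO`. A self-contained sketch (the `toy_model` and `debug_dataset` names are illustrative, not part of the API):

```python
import io
import numpy as np
import tensorflow as tf

# Toy setup (hypothetical): a single-matmul model and a small dataset.
weights = tf.constant(np.random.random((4, 2)).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def toy_model(x):
    return tf.matmul(x, weights)

def debug_dataset():
    for _ in range(3):
        yield [np.random.random((1, 4)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = debug_dataset

debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter, debug_dataset=debug_dataset)
debugger.run()  # Statistics must be collected before dumping.

# Dump the per-layer statistics as CSV into an in-memory buffer.
buf = io.StringIO()
debugger.layer_statistics_dump(buf)
csv_text = buf.getvalue()
```

Writing to a real file works the same way, e.g. `with open('stats.csv', 'w') as f: debugger.layer_statistics_dump(f)`.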
run
run() -> None
Runs models and gets metrics.