Enum defining the optimizations to apply when generating a TFLite model.
DEFAULT Default optimization strategy that quantizes model weights. Enhanced optimizations can be gained by also providing a representative dataset, which allows the converter to quantize biases and activations as well. The converter will do its best to reduce size and latency while minimizing the loss in accuracy (see the first usage sketch after this list).
OPTIMIZE_FOR_SIZE Deprecated. Does the same as DEFAULT.
OPTIMIZE_FOR_LATENCY Deprecated. Does the same as DEFAULT.
EXPERIMENTAL_SPARSITY Experimental flag, subject to change. Enables optimization by taking advantage of sparse model weights trained with pruning. The converter will inspect the sparsity pattern of the model weights and do its best to improve size and latency. The flag can be used alone to optimize float32 models with sparse weights, or together with the DEFAULT optimization mode to optimize quantized models with sparse weights, as shown in the second sketch below.
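As an illustration of the DEFAULT strategy, here is a minimal sketch of post-training quantization with a representative dataset. The toy model, input shape, and random calibration data are placeholders; substitute your own model and inputs that reflect real data.

```python
import numpy as np
import tensorflow as tf

# Placeholder model; in practice, use the model you intend to deploy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Yield a small number of sample inputs so the converter can calibrate
    # the ranges used to quantize activations and biases.
    for _ in range(100):
        yield [np.random.rand(1, 5).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()
```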
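Similarly, EXPERIMENTAL_SPARSITY can be combined with DEFAULT to produce a sparse, quantized model. The sketch below assumes `pruned_model` is a placeholder for a model whose weights were already sparsified by pruning (for example with the TensorFlow Model Optimization Toolkit) and reuses the `representative_dataset` function defined above.

```python
# pruned_model: placeholder for a model trained with pruning so that many
# weights are exactly zero; the sparsity must already be present.
converter = tf.lite.TFLiteConverter.from_keras_model(pruned_model)
converter.optimizations = [
    tf.lite.Optimize.DEFAULT,                 # quantize weights/activations
    tf.lite.Optimize.EXPERIMENTAL_SPARSITY,   # exploit the sparsity pattern
]
converter.representative_dataset = representative_dataset
sparse_quantized_model = converter.convert()
```

Using EXPERIMENTAL_SPARSITY on its own, without DEFAULT or a representative dataset, instead optimizes a float32 model with sparse weights.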