Description of TF Lite’s Toco converter args for quantization aware training

You should never need to set the quantization stats manually. Have you tried the post-training quantization tutorials? https://www.tensorflow.org/lite/performance/post_training_integer_quant Basically, they set the quantization options:

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

Then they pass a "representative dataset" to the converter, so that the converter can run the model for a few batches to gather the …

How can I test a .tflite model to prove that it behaves as the original model using the same Test Data?

You may use the TensorFlow Lite Python interpreter to test your .tflite model. It lets you feed input data from a Python shell and read the output directly, just as you would with a normal TensorFlow model. I have answered this question here. You can also read the official TensorFlow Lite guide for detailed information. You …