Converting a saved model to a quantized .tflite: "Quantization not yet supported for op: 'CUSTOM'"

Problem description

I read the similar question Tensorflow (TF2) quantization to full integer error with TFLiteConverter RuntimeError: Quantization not yet supported for op: 'CUSTOM',
but it did not solve the problem on TF 2.4.1.

I followed this TensorFlow guide to convert with integer-only quantization: https://tensorflow.google.cn/lite/performance/post_training_integer_quant
However, it returns this error:

RuntimeError: Quantization not yet supported for op: 'CUSTOM'.

Code:

import tensorflow as tf
import numpy as np

def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_saved_model(model)  # 'model' is the path to the SavedModel directory

# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# set the representative dataset for the converter so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
#write the quantized tflite model to a file
with open('my_quant.tflite', 'wb') as f:
  f.write(tflite_model)

How can I solve this problem?
Thanks

Tags: tensorflow-lite, quantization

Solution


Could you try using these flags

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.experimental_new_quantizer = True

instead?

"TFLITE_BUILTINS_INT8" denotes the fully-quantized op set, and there are no quantized kernels for custom ops.
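Putting the suggestion together, here is a minimal sketch of the full conversion with the relaxed op set (assuming `saved_model_dir` points at your SavedModel and `representative_data_gen` is defined as in the question). With `TFLITE_BUILTINS`, ops that have no quantized kernel, including the CUSTOM op, fall back to float instead of raising an error:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# TFLITE_BUILTINS (not TFLITE_BUILTINS_INT8) allows ops without quantized
# kernels, such as the CUSTOM op, to stay in float rather than erroring out.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.experimental_new_quantizer = True
converter.representative_dataset = representative_data_gen
# Note: with float fallback the model is no longer integer-only, so forcing
# uint8 inference_input_type/inference_output_type may fail; leaving the
# I/O types at their float defaults is the safer choice here.
tflite_model = converter.convert()

with open('my_quant.tflite', 'wb') as f:
    f.write(tflite_model)
```

The trade-off is that the resulting model is mixed float/int8 rather than fully integer, so it will not run on integer-only accelerators such as the Edge TPU, but it converts without the CUSTOM-op error.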

