tensorflow - How to load a model with tf.saved_model and call the predict function [TENSORFLOW 2.0 API]
Problem description
I am very new to TensorFlow, especially 2.0, since there aren't many examples for that API yet, but it seems much more convenient than 1.x. So far I have managed to train a linear model using the tf.estimator API, and then managed to save it using tf.estimator.exporter.
After that I wanted to load this model using the tf.saved_model API, and I think I succeeded, but I have some doubts about my procedure, so here is a quick look at my code:
So I have an array of feature columns created with the tf.feature_column API, which looks like this:
feature_columns =
[NumericColumn(key='geoaccuracy', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
NumericColumn(key='longitude', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
NumericColumn(key='latitude', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
NumericColumn(key='bidfloor', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
VocabularyListCategoricalColumn(key='adid', vocabulary_list=('115', '124', '139', '122', '121', '146', '113', '103', '123', '104', '147', '114', '149', '148'), dtype=tf.string, default_value=-1, num_oov_buckets=0),
VocabularyListCategoricalColumn(key='campaignid', vocabulary_list=('36', '31', '33', '28'), dtype=tf.string, default_value=-1, num_oov_buckets=0),
VocabularyListCategoricalColumn(key='exchangeid', vocabulary_list=('1241', '823', '1240', '1238'), dtype=tf.string, default_value=-1, num_oov_buckets=0),
...]
After that, I define an estimator from my array of feature columns like this and train it. Up to this point, no problem:
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
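(For context, one common way to feed such an estimator during training is an input_fn built from a pandas DataFrame. A minimal sketch, assuming a DataFrame with one column per feature column plus a hypothetical 'clicked' label column:)

```python
import pandas as pd
import tensorflow as tf

def make_input_fn(df, label_column, batch_size=32, shuffle=True):
    """Build an Estimator input_fn from a pandas DataFrame.

    Sketch only: assumes df has one column per feature column
    plus a label column named by `label_column`.
    """
    def input_fn():
        features = dict(df.drop(columns=[label_column]))
        labels = df[label_column]
        ds = tf.data.Dataset.from_tensor_slices((features, labels))
        if shuffle:
            ds = ds.shuffle(buffer_size=len(df))
        return ds.batch(batch_size)
    return input_fn

# Hypothetical usage:
# linear_est.train(make_input_fn(train_df, label_column='clicked'))
```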
After training my model I wanted to save it. This is where the doubts begin; here is how I proceeded, but I am not sure it is the right way:
serving_input_parse = tf.feature_column.make_parse_example_spec(feature_columns=feature_columns)
""" view of the variable : serving_input_parse =
{'adid': VarLenFeature(dtype=tf.string),
'at': VarLenFeature(dtype=tf.string),
'basegenres': VarLenFeature(dtype=tf.string),
'bestkw': VarLenFeature(dtype=tf.string),
'besttopic': VarLenFeature(dtype=tf.string),
'bidfloor': FixedLenFeature(shape=(1,), dtype=tf.float32, default_value=None),
'browserid': VarLenFeature(dtype=tf.string),
'browserlanguage': VarLenFeature(dtype=tf.string)
...} """
# exporting the model :
linear_est.export_saved_model(
    export_dir_base='./saved',
    serving_input_receiver_fn=tf.estimator.export.build_parsing_serving_input_receiver_fn(serving_input_parse),
    as_text=True)
Now I am trying to load it, but I don't know how to use the loaded model to call a prediction, for example with raw data from a pandas DataFrame:
loaded = tf.saved_model.load('saved/1573144361/')
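(Note: because the model was exported with a parsing serving_input_receiver_fn, its signatures take a batch of serialized tf.train.Example protos — the DT_STRING input visible in the signature dump below — rather than raw tensors. A minimal sketch of serializing one pandas row, assuming Python ints/floats map to the numeric FixedLenFeature columns and everything else to UTF-8 string features; the column handling is a simplification:)

```python
import tensorflow as tf

def row_to_serialized_example(row):
    """Serialize one row (dict-like of column -> value) into a tf.train.Example.

    Sketch only: numeric values become float features, everything else
    becomes a UTF-8 bytes feature, matching the parse spec built above.
    """
    feature = {}
    for key, value in dict(row).items():
        if isinstance(value, (int, float)):
            feature[key] = tf.train.Feature(
                float_list=tf.train.FloatList(value=[float(value)]))
        else:
            feature[key] = tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[str(value).encode('utf-8')]))
    return tf.train.Example(
        features=tf.train.Features(feature=feature)).SerializeToString()

# Hypothetical usage with the model loaded above and a DataFrame `df`:
# predict_fn = loaded.signatures['predict']
# batch = tf.constant([row_to_serialized_example(df.iloc[i]) for i in range(len(df))])
# outputs = predict_fn(examples=batch)
# print(outputs['probabilities'].numpy())
```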
One more thing: I tried to inspect the model's signatures, but I can't really understand what happened to my input shape:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['classification']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: input_example_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['classes'] tensor_info:
dtype: DT_STRING
shape: (-1, 2)
name: head/Tile:0
outputs['scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: head/predictions/probabilities:0
Method name is: tensorflow/serving/classify
signature_def['predict']:
The given SavedModel SignatureDef contains the following input(s):
inputs['examples'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: input_example_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['all_class_ids'] tensor_info:
dtype: DT_INT32
shape: (-1, 2)
name: head/predictions/Tile:0
outputs['all_classes'] tensor_info:
dtype: DT_STRING
shape: (-1, 2)
name: head/predictions/Tile_1:0
outputs['class_ids'] tensor_info:
dtype: DT_INT64
shape: (-1, 1)
name: head/predictions/ExpandDims:0
outputs['classes'] tensor_info:
dtype: DT_STRING
shape: (-1, 1)
name: head/predictions/str_classes:0
outputs['logistic'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: head/predictions/logistic:0
outputs['logits'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: linear/linear_model/linear/linear_model/linear/linear_model/weighted_sum:0
outputs['probabilities'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: head/predictions/probabilities:0
Method name is: tensorflow/serving/predict
signature_def['regression']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: input_example_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['outputs'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: head/predictions/logistic:0
Method name is: tensorflow/serving/regress
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: input_example_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['classes'] tensor_info:
dtype: DT_STRING
shape: (-1, 2)
name: head/Tile:0
outputs['scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: head/predictions/probabilities:0
Method name is: tensorflow/serving/classify
Solution
The saved_model.load(...) documentation demonstrates the basic mechanics like this:
imported = tf.saved_model.load(path)
f = imported.signatures["serving_default"]
print(f(x=tf.constant([[1.]])))
I am still fairly new to this myself, but serving_default appears to be the default signature created when you use saved_model.save(...).
(My understanding is that saved_model.save(...) does not so much save the model as save the graph. To interpret the graph, you need to explicitly store "signatures" defining operations on it. If you don't do this explicitly, "serving_default" will be your only signature.)
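(As a self-contained illustration of saving with an explicit signature and calling it back after loading — the Doubler module, tensor shapes, and temp directory here are all made up for this sketch:)

```python
import tempfile
import tensorflow as tf

class Doubler(tf.Module):
    # input_signature yields a concrete function that can be exported as a signature.
    @tf.function(input_signature=[tf.TensorSpec([None, 1], tf.float32)])
    def double(self, x):
        return {'doubled': x * 2.0}

module = Doubler()
export_dir = tempfile.mkdtemp()
# Explicitly register the traced function as the default serving signature.
tf.saved_model.save(module, export_dir,
                    signatures={'serving_default': module.double})

restored = tf.saved_model.load(export_dir)
f = restored.signatures['serving_default']
# Signature functions are called with keyword arguments; the result is a
# dict keyed by the output name ('doubled' here).
out = f(x=tf.constant([[1.0], [2.0]]))
```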
I have provided an implementation below. A few details are worth noting:
- The input needs to be a tensor, so I had to do the conversion manually.
- The output is a dictionary. The documentation describes the loaded object as "a trackable object with a signatures attribute mapping from signature keys to functions."
In my case, the dictionary key was a rather arbitrary "dense_83". That seemed a bit... specific, so I generalized the solution to ignore the key by iterating over the values:
import tensorflow as tf

def predict(signature_collection, input_data):
    # The signature expects tensors, so convert the raw input first.
    input_tensor = tf.constant(input_data, dtype=tf.float32)
    prediction_tensors = signature_collection.signatures["serving_default"](input_tensor)
    # The output key (e.g. 'dense_83') is model-specific, so return the
    # first value instead of hard-coding the key.
    for _, values in prediction_tensors.items():
        return values.numpy()[0]
    raise Exception("Expected a response from predict(...).")