python - Distributed TensorFlow Estimator execution does not trigger evaluation or export
Problem description
I am experimenting with distributed training using TensorFlow Estimators. In my example I fit a simple sine function with a custom estimator using tf.estimator.train_and_evaluate. After training and evaluation I want to export the model so that it is ready for TensorFlow Serving. However, evaluation and export are only triggered when the estimator is executed in a non-distributed way.
The model and estimator are defined as follows:
import math
import random

import tensorflow as tf

def my_model(features, labels, mode):
    # define simple dense network
    net = tf.layers.dense(features['x'], units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    net = tf.layers.dense(net, units=8, activation=tf.nn.tanh)
    # output layer
    predictions = tf.layers.dense(net, units=1, activation=tf.nn.tanh)
    if mode == tf.estimator.ModeKeys.PREDICT:
        # define output message for tensorflow serving
        export_outputs = {'predict_output': tf.estimator.export.PredictOutput({"predictions": predictions})}
        return tf.estimator.EstimatorSpec(mode=mode, predictions={'predictions': predictions}, export_outputs=export_outputs)
    elif mode == tf.estimator.ModeKeys.EVAL:
        # for evaluation simply use mean squared error
        loss = tf.losses.mean_squared_error(labels=labels, predictions=predictions)
        metrics = {'mse': tf.metrics.mean_squared_error(labels, predictions)}
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics)
    elif mode == tf.estimator.ModeKeys.TRAIN:
        # train on mse with Adagrad optimizer
        loss = tf.losses.mean_squared_error(labels=labels, predictions=predictions)
        optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    else:
        raise ValueError("unhandled mode: %s" % str(mode))

def main(_):
    # prepare training data
    default_batch_size = 50
    examples = [{'x': x, 'y': math.sin(x)} for x in [random.random()*2*math.pi for _ in range(10000)]]
    estimator = tf.estimator.Estimator(model_fn=my_model,
                                       config=tf.estimator.RunConfig(model_dir='sin_model',
                                                                     save_summary_steps=100))

    # function converting examples to dataset
    def dataset_fn():
        # returns a dataset serving batched (feature_map, label)-pairs
        # e.g. ({'x': [1.0, 0.3, 1.1...]}, [0.84, 0.29, 0.89...])
        return tf.data.Dataset.from_generator(
            lambda: iter(examples),
            output_types={"x": tf.float32, "y": tf.float32},
            output_shapes={"x": [], "y": []}) \
            .map(lambda x: ({'x': [x['x']]}, [x['y']])) \
            .repeat() \
            .batch(default_batch_size)

    # function to export model to be used for serving
    feature_spec = {'x': tf.FixedLenFeature([1], tf.float32)}

    def serving_input_fn():
        serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[default_batch_size])
        receiver_tensors = {'examples': serialized_tf_example}
        features = tf.parse_example(serialized_tf_example, feature_spec)
        return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

    # train, evaluate and export
    train_spec = tf.estimator.TrainSpec(input_fn=dataset_fn, max_steps=1000)
    eval_spec = tf.estimator.EvalSpec(input_fn=dataset_fn,
                                      steps=100,
                                      exporters=[tf.estimator.FinalExporter('sin', serving_input_fn)])
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

if __name__ == '__main__':
    tf.app.run(main)
When this code is executed in a single process, I get an output folder containing the model checkpoints, the evaluation data and the model export:
$ ls sin_model/
checkpoint model.ckpt-0.index
eval model.ckpt-0.meta
events.out.tfevents.1532426226.simon model.ckpt-1000.data-00000-of-00001
export model.ckpt-1000.index
graph.pbtxt model.ckpt-1000.meta
model.ckpt-0.data-00000-of-00001
However, when the training process is distributed (only on the local machine in this test setup), the eval and export folders are missing.
I start the individual nodes with the following cluster configuration:
{"cluster": {
    "ps": ["localhost:2222"],
    "chief": ["localhost:2223"],
    "worker": ["localhost:2224"]
}}
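For clarity: all three processes share this cluster definition, and each one derives its role only from the task section of its TF_CONFIG environment variable. A minimal stdlib-only sketch, using the worker task as a hypothetical example:

```python
import json
import os

# Hypothetical TF_CONFIG as it would be set for the worker process.
# The cluster section is identical for ps, chief and worker; only the
# task section differs between the three launch commands.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "ps": ["localhost:2222"],
        "chief": ["localhost:2223"],
        "worker": ["localhost:2224"],
    },
    "task": {"type": "worker", "index": 0},
})

# Each process reads back its own role from the task section.
config = json.loads(os.environ["TF_CONFIG"])
task_type = config["task"]["type"]    # "worker"
task_index = config["task"]["index"]  # 0
print(task_type, task_index)
```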
The ps server is started as follows:
$ TF_CONFIG='{"cluster": {"chief": ["localhost:2223"], "worker": ["localhost:2224"], "ps": ["localhost:2222"]}, "task": {"type": "ps", "index": 0}}' CUDA_VISIBLE_DEVICES= python custom_estimator.py
2018-07-24 12:09:04.913967: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE
2018-07-24 12:09:04.914008: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:132] retrieving CUDA diagnostic information for host: simon
2018-07-24 12:09:04.914013: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:139] hostname: simon
2018-07-24 12:09:04.914035: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] libcuda reported version is: 384.130.0
2018-07-24 12:09:04.914059: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:167] kernel reported version is: 384.130.0
2018-07-24 12:09:04.914079: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:249] kernel version seems to match DSO: 384.130.0
2018-07-24 12:09:04.914961: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job chief -> {0 -> localhost:2223}
2018-07-24 12:09:04.914971: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222}
2018-07-24 12:09:04.914976: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2224}
2018-07-24 12:09:04.915658: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:369] Started server with target: grpc://localhost:2222
(I append CUDA_VISIBLE_DEVICES= to the command line to prevent the workers and the chief from allocating GPU memory. This causes the failed call to cuInit: CUDA_ERROR_NO_DEVICE errors, but they do not matter here.)
The chief is then started as follows:
$ TF_CONFIG='{"cluster": {"chief": ["localhost:2223"], "worker": ["localhost:2224"], "ps": ["localhost:2222"]}, "task": {"type": "chief", "index": 0}}' CUDA_VISIBLE_DEVICES= python custom_estimator.py
2018-07-24 12:09:10.532171: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE
2018-07-24 12:09:10.532234: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:132] retrieving CUDA diagnostic information for host: simon
2018-07-24 12:09:10.532241: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:139] hostname: simon
2018-07-24 12:09:10.532298: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] libcuda reported version is: 384.130.0
2018-07-24 12:09:10.532353: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:167] kernel reported version is: 384.130.0
2018-07-24 12:09:10.532359: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:249] kernel version seems to match DSO: 384.130.0
2018-07-24 12:09:10.533195: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job chief -> {0 -> localhost:2223}
2018-07-24 12:09:10.533207: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222}
2018-07-24 12:09:10.533211: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2224}
2018-07-24 12:09:10.533835: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:369] Started server with target: grpc://localhost:2223
2018-07-24 12:09:14.038636: I tensorflow/core/distributed_runtime/master_session.cc:1165] Start master session 71a2748ad69725ae with config: allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }
The worker is then started as follows:
$ TF_CONFIG='{"cluster": {"chief": ["localhost:2223"], "worker": ["localhost:2224"], "ps": ["localhost:2222"]}, "task": {"type": "worker", "index": 0}}' CUDA_VISIBLE_DEVICES= python custom_estimator.py
2018-07-24 12:09:13.172260: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE
2018-07-24 12:09:13.172320: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:132] retrieving CUDA diagnostic information for host: simon
2018-07-24 12:09:13.172327: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:139] hostname: simon
2018-07-24 12:09:13.172362: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] libcuda reported version is: 384.130.0
2018-07-24 12:09:13.172399: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:167] kernel reported version is: 384.130.0
2018-07-24 12:09:13.172405: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:249] kernel version seems to match DSO: 384.130.0
2018-07-24 12:09:13.173230: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job chief -> {0 -> localhost:2223}
2018-07-24 12:09:13.173242: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:2222}
2018-07-24 12:09:13.173247: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2224}
2018-07-24 12:09:13.173783: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:369] Started server with target: grpc://localhost:2224
2018-07-24 12:09:18.774264: I tensorflow/core/distributed_runtime/master_session.cc:1165] Start master session 1d13ac84816fdc80 with config: allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }
After a short time the chief process stops, and the sin_model folder contains model checkpoints but neither an export nor an eval directory:
$ ls sin_model/
checkpoint model.ckpt-0.meta
events.out.tfevents.1532426950.simon model.ckpt-1001.data-00000-of-00001
graph.pbtxt model.ckpt-1001.index
model.ckpt-0.data-00000-of-00001 model.ckpt-1001.meta
model.ckpt-0.index
Is any further configuration required for evaluation or export in a distributed setup?
I am using Python 3.5 and TensorFlow 1.8.
Solution
In distributed mode, you can run evaluation in parallel with the training by setting the task type to evaluator:
{
"cluster": {
"ps": ["localhost:2222"],
"chief": ["localhost:2223"],
"worker": ["localhost:2224"]
},
"task": {
"type": "evaluator", "index": 0
},
"environment": "cloud"
}
You do not need to add the evaluator to the cluster definition. Also, I am not sure whether it is relevant to your case, but setting environment: 'cloud' in your cluster configuration may help.
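As a concrete sketch, the evaluator's TF_CONFIG can be built from the same cluster definition as the other tasks (stdlib only; the script name custom_estimator.py is taken from the question):

```python
import json
import os

# Same cluster definition as the ps/chief/worker processes; the
# "evaluator" task type deliberately does NOT appear in the cluster.
cluster = {
    "ps": ["localhost:2222"],
    "chief": ["localhost:2223"],
    "worker": ["localhost:2224"],
}
tf_config = {
    "cluster": cluster,
    "task": {"type": "evaluator", "index": 0},
    "environment": "cloud",
}

# Export TF_CONFIG before starting the estimator script, i.e. the
# equivalent of: TF_CONFIG='...' python custom_estimator.py
os.environ["TF_CONFIG"] = json.dumps(tf_config)
print(os.environ["TF_CONFIG"])
```

The evaluator process runs the same script as the other nodes; train_and_evaluate detects the task type and performs only evaluation (and export) in that process.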