python - How to deploy a simple neural network from A-Z in MXNet
Problem description
I am trying to build a simple neural network in MXNet and deploy it on a server using mxnet-model-server.
The biggest issue is deploying the model - the model server crashes after the .mar file is uploaded, and I have no idea what the problem could be.
I used the following code to create a custom (but very simple) neural network for testing:
from __future__ import print_function
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon
data_ctx = mx.cpu()
model_ctx = mx.cpu()
# fix the seed
np.random.seed(42)
mx.random.seed(42)
num_examples = 1000
X = mx.random.uniform(shape=(num_examples, 49))
y = mx.random.uniform(shape=(num_examples, 1))
dataset_train = mx.gluon.data.dataset.ArrayDataset(X, y)
dataset_test = dataset_train
data_loader_train = mx.gluon.data.DataLoader(dataset_train, batch_size=25)
data_loader_test = mx.gluon.data.DataLoader(dataset_test, batch_size=25)
num_outputs = 2
net = gluon.nn.HybridSequential()
net.hybridize()
with net.name_scope():
    net.add(gluon.nn.Dense(49, activation="relu"))
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(num_outputs))
net.collect_params().initialize(mx.init.Normal(sigma=.1), ctx=model_ctx)
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .01})
epochs = 1
smoothing_constant = .01
for e in range(epochs):
    cumulative_loss = 0
    for i, (data, label) in enumerate(data_loader_train):
        data = data.as_in_context(model_ctx).reshape((-1, 49))
        label = label.as_in_context(model_ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])
        cumulative_loss += nd.sum(loss).asscalar()
Next, the model is exported with:
net.export("model_files/my_project")
The result is a .json file and a .params file.
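For reference, with the default epoch argument that call should leave two files named after the export prefix, along these lines (names inferred from the prefix used above):
model_files/my_project-symbol.json   # serialized network graph
model_files/my_project-0000.params   # trained weights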
I created a signature.json:
{
  "inputs": [
    {
      "data_name": "data",
      "data_shape": [1, 49]
    }
  ]
}
The model handler is the same as the one in the mxnet tutorials:
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
# http://www.apache.org/licenses/LICENSE-2.0
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
"""
ModelHandler defines a base model handler.
"""
import logging
import time
class ModelHandler(object):
"""
A base Model handler implementation.
"""
def __init__(self):
self.error = None
self._context = None
self._batch_size = 0
self.initialized = False
def initialize(self, context):
"""
Initialize model. This will be called during model loading time
:param context: Initial context contains model server system properties.
:return:
"""
self._context = context
self._batch_size = context.system_properties["batch_size"]
self.initialized = True
def preprocess(self, batch):
"""
Transform raw input into model input data.
:param batch: list of raw requests, should match batch size
:return: list of preprocessed model input data
"""
assert self._batch_size == len(batch), "Invalid input batch size: {}".format(len(batch))
return None
def inference(self, model_input):
"""
Internal inference methods
:param model_input: transformed model input data
:return: list of inference output in NDArray
"""
return None
def postprocess(self, inference_output):
"""
Return predict result in batch.
:param inference_output: list of inference output
:return: list of predict results
"""
return ["OK"] * self._batch_size
def handle(self, data, context):
"""
Custom service entry point function.
:param data: list of objects, raw input from request
:param context: model server context
:return: list of outputs to be send back to client
"""
self.error = None # reset earlier errors
try:
preprocess_start = time.time()
data = self.preprocess(data)
inference_start = time.time()
data = self.inference(data)
postprocess_start = time.time()
data = self.postprocess(data)
end_time = time.time()
metrics = context.metrics
metrics.add_time("PreprocessTime", round((inference_start - preprocess_start) * 1000, 2))
metrics.add_time("InferenceTime", round((postprocess_start - inference_start) * 1000, 2))
metrics.add_time("PostprocessTime", round((end_time - postprocess_start) * 1000, 2))
return data
except Exception as e:
logging.error(e, exc_info=True)
request_processor = context.request_processor
request_processor.report_status(500, "Unknown inference error")
return [str(e)] * self._batch_size
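As an aside, the handler above is only the tutorial stub: preprocess and inference return None and every request is answered with "OK". A minimal sketch of what a version wired to the exported network could look like follows; the file names, the model_dir system property, and the request payload keys are assumptions on my part, so treat it as an illustration rather than a verified handler.
# hypothetical sketch, assuming it lives in the same file as the ModelHandler class above
import json
import os
import mxnet as mx


class MyProjectHandler(ModelHandler):
    """Extends the stub to load the exported symbol/params and run a forward pass."""

    def initialize(self, context):
        super(MyProjectHandler, self).initialize(context)
        # assumes the archive contains the files produced by net.export("my_project")
        model_dir = context.system_properties["model_dir"]
        self.net = mx.gluon.nn.SymbolBlock.imports(
            os.path.join(model_dir, "my_project-symbol.json"),
            ["data"],
            os.path.join(model_dir, "my_project-0000.params"),
            ctx=mx.cpu())

    def preprocess(self, batch):
        # assumes the client posts a JSON list of 49 numbers under the "data" or "body" key
        payload = batch[0].get("body") or batch[0].get("data")
        if isinstance(payload, (bytes, bytearray)):
            payload = payload.decode("utf-8")
        return mx.nd.array(json.loads(payload)).reshape((1, 49))

    def inference(self, model_input):
        return self.net(model_input)

    def postprocess(self, inference_output):
        return [inference_output.asnumpy().tolist()]


_service = MyProjectHandler()


def handle(data, context):
    if not _service.initialized:
        _service.initialize(context)
    if data is None:
        return None
    return _service.handle(data, context)
If something like this were used, the --handler argument below would need to point at whichever module actually defines handle().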
Next, I created the .mar file with:
model-archiver --model-name my_project --model-path my_project --handler ssd_service:handle
And started the model on the server:
mxnet-model-server --start --model_store my_project --models ssd=my_project.mar
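(As a side note, whether the workers actually came up can be checked against the server endpoints before sending predictions; 8080/8081 are the default inference and management ports and may differ on a custom config:)
curl http://127.0.0.1:8080/ping     # inference API health check
curl http://127.0.0.1:8081/models   # management API: list loaded models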
I did follow every one of the tutorials here: https://github.com/awslabs/mxnet-model-server
However, the server keeps crashing: the worker dies, the backend worker dies, the worker disconnects, and I get Load model failed: ssd, error: worker dead.
I have absolutely no idea what to do, so I would be very glad if you could help me!
Best
Solution
I tried out your code and it works fine on my laptop. If I run: curl -X POST http://127.0.0.1:8080/predictions/ssd -F "data=[0 1 2 3 4]"
I get: OK%
I can only guess why it does not work on your machine:
Note that the model-store argument should be written with a -, not with a _ as in your example. My command to run mxnet-model-server looks like this: mxnet-model-server --start --model-store ./ --models ssd=my_project.mar
Which version of mxnet-model-server are you using? The latest one is 1.0.2, but I have 1.0.1 installed, so maybe you want to downgrade and give it a try:
pip install mxnet-model-server==1.0.1
Same question for the MXNet version. In my case I use the nightly build, installed via
pip install mxnet --pre
I see that your model is very basic, so it shouldn't depend on much... still, install 1.4.0 (the current version) just in case.
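(To double-check which versions are actually installed before up- or downgrading, something like this works:)
python -c "import mxnet; print(mxnet.__version__)"
pip show mxnet-model-server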
Not sure, but I hope that helps.