Response time of a Flask application using TensorFlow grows with the number of requests

Problem description

I am setting up a Python application with Flask (0.12.2). In this application we use a TensorFlow (1.12.0) neural network.

My problem is that the more requests I send to my API, the slower the responses get. It looks as if the server is getting overloaded.

You can check the script below, with which I deployed my application.

import tensorflow as tf
import numpy as np
from gevent.pywsgi import WSGIServer
from flask import Flask, jsonify, request

app = Flask(__name__)
sess = tf.Session()

parameters = {
    'W1': np.array([[0.5211617, -0.16149558], [0.87381583, 0.67057973]], dtype="float64").reshape(2, 2),
    'W2': np.array([2.3268008, 2.1230187], dtype="float64").reshape(1, 2),
    'b1': np.array([[-0.3840024], [-0.06019319]], dtype="float64").reshape(2, 1),
    'b2': np.array([-4.062307], dtype="float64").reshape(1, 1)
}

parameters["W1"] = tf.convert_to_tensor(parameters["W1"])
parameters["b1"] = tf.convert_to_tensor(parameters["b1"])
parameters["W2"] = tf.convert_to_tensor(parameters["W2"])
parameters["b2"] = tf.convert_to_tensor(parameters["b2"])
x = tf.placeholder("float64", [2, None])


def forward_propagation_for_predict(X):
    cur_activations = X
    cur_activations = tf.nn.relu(tf.add(tf.matmul(parameters["W1"], cur_activations), parameters["b1"]))
    output_activations = tf.math.sigmoid(tf.add(tf.matmul(parameters["W2"], cur_activations), parameters["b2"]))
    return output_activations


def predict(X):
    # note: this builds new graph operations on every call
    output_activations = forward_propagation_for_predict(x)
    prediction = sess.run(output_activations, feed_dict={x: X})[0][0]
    return prediction



@app.route('/model', methods=['GET','POST'])
def serve_utils():
    result = {}
    if request.method == 'POST':
        content = request.json
        prediction = predict(np.array([content['x1'], content['x2']], dtype="float64").reshape(2, 1))
        result['prediction'] = str(prediction)[:5]
    return jsonify(result)



if __name__ == "__main__":
    http_server = WSGIServer(('', 9096), app)
    http_server.serve_forever()

Then I sent many requests to my application and printed the response times with the following code.

import requests
import json
import datetime

url = "http://localhost:9096/model"

def request_my_local_API(x1, x2):
    return requests.post(url, headers={"Content-Type": "application/json", "Accept": "application/json"}, verify=False,
                         data=json.dumps({"x1": x1, "x2": x2})).json()



for i in range(2900):
    t0 = datetime.datetime.now()
    prediction = request_my_local_API(i * 3, i * 4)
    if i % 100 == 0:
        print('Iteration:' + str(i) + '| Time for response: ' + str(datetime.datetime.now() - t0) + ' | Current prediction: ' + prediction['prediction'])

The script prints the following:

Iteration:0| Time for response: 0:00:00.020178 | Current prediction: 0.016
Iteration:100| Time for response: 0:00:00.017582 | Current prediction: 1.0
Iteration:200| Time for response: 0:00:00.024748 | Current prediction: 1.0
Iteration:300| Time for response: 0:00:00.033445 | Current prediction: 1.0
Iteration:400| Time for response: 0:00:00.040043 | Current prediction: 1.0
Iteration:500| Time for response: 0:00:00.048611 | Current prediction: 1.0
Iteration:600| Time for response: 0:00:00.102753 | Current prediction: 1.0
Iteration:700| Time for response: 0:00:00.063461 | Current prediction: 1.0
Iteration:800| Time for response: 0:00:00.075354 | Current prediction: 1.0
Iteration:900| Time for response: 0:00:00.080214 | Current prediction: 1.0
Iteration:1000| Time for response: 0:00:00.092557 | Current prediction: 1.0
Iteration:1100| Time for response: 0:00:00.102275 | Current prediction: 1.0
Iteration:1200| Time for response: 0:00:00.110713 | Current prediction: 1.0
Iteration:1300| Time for response: 0:00:00.126928 | Current prediction: 1.0
Iteration:1400| Time for response: 0:00:00.135294 | Current prediction: 1.0
Iteration:1500| Time for response: 0:00:00.139847 | Current prediction: 1.0
Iteration:1600| Time for response: 0:00:00.151268 | Current prediction: 1.0
Iteration:1700| Time for response: 0:00:00.154732 | Current prediction: 1.0
Iteration:1800| Time for response: 0:00:00.161457 | Current prediction: 1.0
Iteration:1900| Time for response: 0:00:00.182295 | Current prediction: 1.0
Iteration:2000| Time for response: 0:00:00.182100 | Current prediction: 1.0
Iteration:2100| Time for response: 0:00:00.191160 | Current prediction: 1.0
Iteration:2200| Time for response: 0:00:00.211021 | Current prediction: 1.0
Iteration:2300| Time for response: 0:00:00.248748 | Current prediction: 1.0
Iteration:2400| Time for response: 0:00:00.220034 | Current prediction: 1.0
Iteration:2500| Time for response: 0:00:00.250308 | Current prediction: 1.0
Iteration:2600| Time for response: 0:00:00.274345 | Current prediction: 1.0
Iteration:2700| Time for response: 0:00:00.252312 | Current prediction: 1.0
Iteration:2800| Time for response: 0:00:00.314059 | Current prediction: 1.0
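A quick check on the timings above shows the growth is roughly linear rather than constant: each request adds a fixed amount of extra work. A back-of-the-envelope slope from the first and last printed values:

```python
# Slope of the response times printed above, taken from the first and
# last samples: roughly 105 extra microseconds per request.
t_first, t_last = 0.020178, 0.314059  # seconds at iterations 0 and 2800
per_request = (t_last - t_first) / 2800
print(round(per_request * 1e6))  # microseconds added per request
```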

After I defined the operations once and only called session.run to predict values, the problem was solved.

So I moved one line of code out of predict:

import tensorflow as tf
import numpy as np
from gevent.pywsgi import WSGIServer
from flask import Flask, jsonify, request

app = Flask(__name__)
sess = tf.Session()

parameters = {
    'W1': np.array([[0.5211617, -0.16149558], [0.87381583, 0.67057973]], dtype="float64").reshape(2, 2),
    'W2': np.array([2.3268008, 2.1230187], dtype="float64").reshape(1, 2),
    'b1': np.array([[-0.3840024], [-0.06019319]], dtype="float64").reshape(2, 1),
    'b2': np.array([-4.062307], dtype="float64").reshape(1, 1)
}

parameters["W1"] = tf.convert_to_tensor(parameters["W1"])
parameters["b1"] = tf.convert_to_tensor(parameters["b1"])
parameters["W2"] = tf.convert_to_tensor(parameters["W2"])
parameters["b2"] = tf.convert_to_tensor(parameters["b2"])
x = tf.placeholder("float64", [2, None])


def forward_propagation_for_predict(X):
    cur_activations = X
    cur_activations = tf.nn.relu(tf.add(tf.matmul(parameters["W1"], cur_activations), parameters["b1"]))
    output_activations = tf.math.sigmoid(tf.add(tf.matmul(parameters["W2"], cur_activations), parameters["b2"]))
    return output_activations

# the graph is built once, at startup
output_activations = forward_propagation_for_predict(x)

def predict(X):
    prediction = sess.run(output_activations, feed_dict={x: X})[0][0]
    return prediction



@app.route('/model', methods=['GET','POST'])
def serve_utils():
    result = {}
    if request.method == 'POST':
        content = request.json
        prediction = predict(np.array([content['x1'], content['x2']], dtype="float64").reshape(2, 1))
        result['prediction'] = str(prediction)[:5]
    return jsonify(result)



if __name__ == "__main__":
    http_server = WSGIServer(('', 9096), app)
    http_server.serve_forever()

Then I sent many requests to my application again, and the response time was almost the same every time!
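As a side note (a sketch, not part of the original scripts), the network is small enough to sanity-check in plain NumPy with the same weights; for x1 = x2 = 0 it reproduces the 0.016 prediction printed at iteration 0:

```python
import numpy as np

# Plain-NumPy version of the two-layer network above, using the same weights.
W1 = np.array([[0.5211617, -0.16149558], [0.87381583, 0.67057973]])
W2 = np.array([[2.3268008, 2.1230187]])
b1 = np.array([[-0.3840024], [-0.06019319]])
b2 = np.array([[-4.062307]])

def predict_np(x1, x2):
    X = np.array([[x1], [x2]], dtype="float64")
    hidden = np.maximum(W1 @ X + b1, 0.0)             # ReLU
    out = 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))   # sigmoid
    return out[0][0]

print(str(predict_np(0.0, 0.0))[:5])  # same truncation the server applies
```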

Tags: python, tensorflow

Solution


The problem is that every call to forward_propagation_for_predict adds new operations to the graph, which causes the slowdown. The graph grows without bound, which is both problematic and unnecessary.

You should define the operations once and only call session.run to predict values.
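The anti-pattern is easy to reproduce without TensorFlow: any handler that appends nodes to shared state on every request makes each later request more expensive. A minimal sketch (the `graph` list and node names below are made up for illustration; in TF 1.x you can also call `sess.graph.finalize()` after building, so that any accidental op creation raises an error instead of silently growing the graph):

```python
# Stand-in for TF's default graph: a shared, ever-growing container.
graph = []

def build_forward_pass():
    # Like forward_propagation_for_predict: every call adds fresh ops.
    graph.extend(["matmul", "add", "relu", "matmul", "add", "sigmoid"])

def handle_request_rebuilding():
    build_forward_pass()  # anti-pattern: rebuild on every request

for _ in range(1000):
    handle_request_rebuilding()
print(len(graph))  # 6000 ops after 1000 requests, instead of a constant 6
```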
