Python multiprocessing takes longer

Problem description

I am trying to use the Python multiprocessing module to reduce the running time of my filtering code. As a first step I ran some experiments, and the results were not promising.

I defined a function that runs a loop over a given range, then I ran that function with and without threads and measured the time. Here is my code:

import time
from multiprocessing.pool import ThreadPool

def do_loop(i, j):
    l = []
    for k in range(i, j):
        l.append(k)
    return l

#loop variable
x = 7

#without threading
start_time = time.time()
c = do_loop(0,10**x)
print("--- %s seconds ---" % (time.time() - start_time))

#with threading
def thread_work(n):
    #dividing the loop into three parts
    a = 0
    b = n // 3
    c = 2 * n // 3
    #thread pool
    pool = ThreadPool(processes=10)
    async_result1 = pool.apply_async(do_loop, (a, b))
    async_result2 = pool.apply_async(do_loop, (b, c))
    async_result3 = pool.apply_async(do_loop, (c, n))
    #get the result from all workers
    result = async_result1.get() + async_result2.get() + async_result3.get()

    return result

start_time = time.time()
ll = thread_work(10**x)
print("--- %s seconds ---" % (time.time() - start_time))

For x=7 the results are:

--- 1.0931916236877441 seconds ---
--- 1.4213247299194336 seconds ---

Without threads it takes less time. And there is another problem: for x=8, the threaded version usually ends in a MemoryError. Once I did get a result:

--- 17.04124426841736 seconds ---
--- 32.871358156204224 seconds ---

Getting this right matters because I need to optimize a filtering task that currently takes 6 hours.

Tags: python, multithreading, multiprocessing

Solution


Depending on your task, multiprocessing may or may not take longer. If you want to take advantage of your CPU cores and speed up the filtering process, you should use multiprocessing.Pool, which (quoting the Python docs)

"offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism)."

I built a data-filtering example, then measured the time of the simple approach and the time of the multiprocessing approach (starting from your code):

# keep only the sentences that end in "we are what we doing" and whose second word is "are"


import time
from multiprocessing.pool import Pool

LEN_FILTER_SENTENCE = len('we are what we dream')
num_process = 10

def do_loop(sentences):
    l = []
    for sentence in sentences:
        if sentence[-LEN_FILTER_SENTENCE:].lower() == 'we are what we doing' and sentence.split()[1] == 'are':
            l.append(sentence)
    return l

#with multiprocessing
def thread_work(sentences):
    #multiprocessing
    pool = Pool(processes=num_process)
    #feed the pool chunks of num_process sentences at a time
    pool_food = (sentences[i: i + num_process] for i in range(0, len(sentences), num_process))
    result = pool.map(do_loop, pool_food)
    return result

def test(data_size=5, sentence_size=100):
    to_be_filtered = ['we are what we doing'*sentence_size] * 10 ** data_size + ['we are what we dream'*sentence_size] * 10 ** data_size

    start_time = time.time()
    c = do_loop(to_be_filtered)
    simple_time = (time.time() - start_time)



    start_time = time.time()
    ll = [e for l in thread_work(to_be_filtered) for e in l]
    multiprocessing_time = (time.time() - start_time)
    assert c == ll 
    return simple_time, multiprocessing_time

data_size controls the length of the data, while sentence_size is a multiplicative factor applied to each data element, so sentence_size is proportional to the number of CPU operations requested per item.
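To make the sentence_size knob concrete: each element is just the base sentence repeated sentence_size times, so the slicing and splitting the filter does per element grows linearly with it (a small illustration, not part of the benchmark):

```python
base = 'we are what we dream'  # 20 characters

for sentence_size in (1, 10, 100):
    element = base * sentence_size
    # the filter slices the last 20 characters and splits the whole
    # string, so longer elements mean more CPU work per item
    print(sentence_size, len(element))
```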

data_size = [1, 2, 3, 4, 5, 6]
results = {i: {'simple_time': [], 'multiprocessing_time': []} for i in data_size}
sentence_size = list(range(1, 500, 100))
for size in data_size:
    for s_size in sentence_size:
        simple_time, multiprocessing_time = test(size, s_size)
        results[size]['simple_time'].append(simple_time)
        results[size]['multiprocessing_time'].append(multiprocessing_time)

import pandas as pd

df_small_data = pd.DataFrame({'simple_data_size_1': results[1]['simple_time'],
                   'simple_data_size_2': results[2]['simple_time'],
                   'simple_data_size_3': results[3]['simple_time'],
                   'multiprocessing_data_size_1': results[1]['multiprocessing_time'],
                   'multiprocessing_data_size_2': results[2]['multiprocessing_time'],
                   'multiprocessing_data_size_3': results[3]['multiprocessing_time'],

                   'sentence_size': sentence_size})

df_big_data = pd.DataFrame({'simple_data_size_4': results[4]['simple_time'],
                   'simple_data_size_5': results[5]['simple_time'],
                   'simple_data_size_6': results[6]['simple_time'],
                   'multiprocessing_data_size_4': results[4]['multiprocessing_time'],
                   'multiprocessing_data_size_5': results[5]['multiprocessing_time'],
                   'multiprocessing_data_size_6': results[6]['multiprocessing_time'],

                   'sentence_size': sentence_size})

Plotting the timings for the small data:

ax = df_small_data.set_index('sentence_size').plot(figsize=(20, 10), title = 'Simple vs multiprocessing approach for small data')
ax.set_ylabel('Time in seconds')

[figure: simple vs multiprocessing timings for small data]

Plotting the timings for the (relatively) big data:

[figure: simple vs multiprocessing timings for big data]

As you can see, multiprocessing shows its strength when you have big data whose elements each require a relatively large amount of CPU power.
