Python multiprocessing: high memory usage even after calling pool.close and pool.join

Problem description

I have to parallelize a piece of code that reads one line from a parameter file, does some parallelized work, and then reads the next line, until the end of the file. This is what I did:

import os
import random
import multiprocessing
from functools import wraps

import numpy as np

def unpack(func):
    # Pool.map passes a single argument, so unpack the tuple before calling func
    @wraps(func)
    def wrapper(arg_tuple):
        return func(*arg_tuple)
    return wrapper

@unpack
def parallel_job(seed, distributioncsv, shift):
    # for each core, create a different file, use a different seed and start
    f = open(distributioncsv, 'w+')
    random.seed(seed)
    np.random.seed(seed)
    # number of simulations each core should make (integer division)
    threadsim = simnum // threadnum
    for i in range(0, threadsim):
        ...do stuff
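
The unpack decorator is only needed because Pool.map hands each worker a single argument. On Python 3, Pool.starmap does that unpacking itself; below is a minimal, self-contained sketch with a placeholder job function (an illustration, not the code from the question):

import multiprocessing

def job(seed, csvpath, shift):
    # stand-in for parallel_job; just combines two of its arguments
    return seed + shift

if __name__ == '__main__':
    args = [(1, "a.csv", 0), (2, "b.csv", 10)]
    with multiprocessing.Pool(2) as pool:
        # starmap unpacks each tuple: job(1, "a.csv", 0), job(2, "b.csv", 10)
        print(pool.starmap(job, args))  # prints [1, 12]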

My main looks like this: I read the file, loop over its lines, and call multiprocessing. First I define some constants:

if __name__ == '__main__':
    #number of simulations, and number of threads to use
    threadnum = 10

    simnum = threadnum*10

    #order in file: Network, N, lambda, gamma, k, i0, tauf, folder
    N_f, lamma_f, gamma_f, k_f, i0_f, tauf_f = np.loadtxt("parameters.txt", delimiter=',', dtype=float, usecols=[1,2,3,4,5,6], unpack=True)
    folder_f, networkchoice_f = np.loadtxt("parameters.txt", delimiter=',', dtype=str, usecols=[7,0], unpack=True)

    for i in range(0,len(N_f)):
        #number of nodes
        N = N_f[i]
        #per node infection probability 
        lamma = lamma_f[i]
        #per node recovery probability
        gamma = gamma_f[i]
        #average network degree or number of new links per node
        k = int(k_f[i])
        #initial number of infected nodes
        i0 = int(i0_f[i])
        #tauend of simulations
        tauf = tauf_f[i]
        #folder where to save files
        folder = os.getenv("HOME") + folder_f[i]
        #Network to simulate
        networkchoice = networkchoice_f[i]

        #where to put the sum of all the distributions
        distributioncsv = folder +"/distribution.csv"

        #where to put all the figures
        destinationofallfigures = folder+"/Figures/a(k)/"
        #file for the k - E(k) values
        akfile = folder+'/csv/E(ak).csv'
        #plot the mean epidemics from simulations (t, I)
        avgepidemics = folder+"/Figures/I(t)/average"
        #columns name
        name = ['I', 'SI', 'deltas','t', 'run']
        saveplots = folder+"/Figures/"
        #file for the mean average
        averagecsv = folder+"/csv/average"

        #different seed for each thread
        seed = [j*2759 + 37*j**2 + 4757 for j in range(threadnum)]
        #to enumerate my runs without losing track of them
        shift=[j*simnum for j in range(simnum)]
        #vector with the name of the files to be created
        distribution = [folder+"/csv/distribution_%d.csv" %j for j in range(threadnum)]

This is the relevant part about the parallelization:

        arguments = zip(seed, distribution, shift)

        #print arguments


        #begin parallelization

        pool = multiprocessing.Pool(threadnum)

        #spawn threadnum worker processes and hand them the parallel jobs
        pool.map(parallel_job, iterable=arguments)

        pool.close()
        # wait for all the worker processes to be done
        pool.join()
        ... do other unparallelized stuff and end the loop
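
The accepted fix is not shown here, but one common mitigation for memory that grows across pool runs is sketched below, as an assumption rather than a confirmed diagnosis for this exact code: pass maxtasksperchild so that every worker process is replaced after a bounded number of tasks, which returns any memory leaked inside a worker to the OS, and let a with block handle the shutdown.

        # Sketch only: same submission as above, but each worker is recycled
        # after a single task, so allocations made inside "...do stuff" cannot
        # accumulate across tasks or across iterations of the outer loop.
        with multiprocessing.Pool(threadnum, maxtasksperchild=1) as pool:
            pool.map(parallel_job, arguments)
        # map() has already returned every result here, so the terminate()
        # issued by the with block on exit is safe; no explicit close()/join()
        # is needed.

If the growth turns out to live in the parent instead (for example, large per-iteration objects kept alive by the unparallelized part of the loop), recycling workers will not help, and the objects built in each iteration are the place to look.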

At the end of each loop I expect my memory usage to decrease, since at some point pool.close() and pool.join() are called.

Instead, loop after loop, the memory usage keeps increasing.
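
One quick way to narrow this down: after pool.join() the worker processes should be gone, and that can be checked from the parent. If the list printed below is empty and the memory still grows, the growth is held by the parent process itself. A small sketch of that check, placed right after the join in the existing loop (it assumes no other child processes are running):

        pool.close()
        pool.join()
        # the pool's workers have exited by now; with no other children this
        # prints an empty list, so any remaining growth is memory held by the
        # parent process
        print(multiprocessing.active_children())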

Could it be because my parallel_job function has no return value? Should I return None at the end of parallel_job? At the moment I am not returning anything.
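
For reference, a Python function with no return statement already returns None, so adding an explicit return None changes nothing. pool.map does collect every worker's return value into a list in the parent, but a list of None values is tiny; it would only matter if the jobs returned large objects. A self-contained check:

import multiprocessing

def implicit(_):
    pass            # no return statement: implicitly returns None

def explicit(_):
    return None

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        # both calls produce [None, None, None, None]
        assert pool.map(implicit, range(4)) == pool.map(explicit, range(4))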

EDIT: I am now measuring the increase in RAM usage. Unfortunately the process takes a long time; the last time I launched it, after 4 hours it had consumed all of my computer's available memory and swap (30 GB).

If I launch the unparallelized version of this program, each loop consumes about 3 GB of RAM.
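
Logging the parent's resident set size once per iteration is usually faster than waiting hours for swap to fill up. A minimal sketch, assuming the third-party psutil package is installed (the helper name log_rss is just an illustration):

import psutil

_proc = psutil.Process()  # the current (parent) process

def log_rss(tag):
    # resident set size of the parent, in megabytes
    rss_mb = _proc.memory_info().rss / (1024.0 ** 2)
    print("%s: RSS = %.1f MB" % (tag, rss_mb))

# called inside the main loop, e.g. log_rss("before pool") and
# log_rss("after pool.join") at the top and bottom of each iteration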

Tags: python, memory-management, garbage-collection, multiprocessing

Solution

