pool.apply_async runs only one process

Problem description

I wrote some data-cleaning code. The worker function is "single_worker", which assigns values from 'a' to 'l' when the first column exceeds a cutoff point. Since the whole dataset is very large, I split it into 8 subsets for this multiprocessing approach. However, when I run the code from cmd, only one process actually works, on subset 1, and when that process finishes no further process picks up subset 2.

My Python version is 3.7 with all the Anaconda packages. The system is Windows 10 with a 12-core CPU.

Here is the worker function:

    # single_worker.py
    import numpy as np
    from tqdm import trange

    def single_worker(merge, beme, me, a):
        c = 0
        year_1 = 0
        month_1 = 0
        length = len(merge)
        for i in trange(length):
            c = c + 1
            year = merge['year'][i]
            month = merge['month'][i]
            # refresh the cutoffs whenever the (year, month) pair changes
            if (year != year_1) | (month != month_1):
                key_beme = beme[beme.year == year].index.tolist()
                k = key_beme[0]
                beme6 = beme['6'][k]
                beme14 = beme['14'][k]

                key_me = me[(me.year == year) & (me.month == month)].index.tolist()
                b = key_me[0]
                me10 = me['10'][b]

                year_1 = year
                month_1 = month

            if ~(merge['me'][i] > 0):
                merge['bs_new'][i] = np.nan
                continue
            if merge['me'][i] <= me10:
                merge['bs_new'][i] = 's'
            else:
                merge['bs_new'][i] = 'b'

            if ~(merge['bm'][i] > 0):
                merge['hl_new'][i] = np.nan
                continue
            if merge['bm'][i] <= beme6:
                merge['hl_new'][i] = 'l'
            elif merge['bm'][i] >= beme14:
                merge['hl_new'][i] = 'h'
            else:
                merge['hl_new'][i] = 'm'
        # write this subset's result once the loop is done
        name = str(a) + ".csv"
        merge.to_csv(name)

Here is the main script:

    import pandas as pd
    import numpy as np

    from tqdm import trange
    from multiprocessing import cpu_count
    from multiprocessing import Pool
    from single_worker import single_worker

    merge = pd.read_csv('merge.csv')
    beme = pd.read_csv('beme.csv')
    me = pd.read_csv('me.csv')
    Len_imgs = len(merge)
    num_cores = cpu_count()

    # cap the pool at 8 workers, one per subset
    if num_cores >= 8:
        num_cores = 8

    subset1 = merge[:Len_imgs // 8]
    subset2 = merge[Len_imgs // 8: Len_imgs // 4]
    subset3 = merge[Len_imgs // 4: (Len_imgs * 3) // 8]
    subset4 = merge[(Len_imgs * 3) // 8: Len_imgs // 2]
    subset5 = merge[Len_imgs // 2: (Len_imgs * 5) // 8]
    subset6 = merge[(Len_imgs * 5) // 8: (Len_imgs * 6) // 8]
    subset7 = merge[(Len_imgs * 6) // 8: (Len_imgs * 7) // 8]
    subset8 = merge[(Len_imgs * 7) // 8:]

    List_subsets = [subset1, subset2, subset3, subset4,
                    subset5, subset6, subset7, subset8]
    print("Finish separating subsets")
    p = Pool(num_cores)

    k = 0
    for i in range(num_cores):
        k = k + 1
        p.apply_async(single_worker, (List_subsets[i], beme, me, k))
        print(k)
    p.close()
    p.join()

By the way, while it was running I checked Task Manager: CPU utilization stayed below 25% the whole time. I'm not sure whether something is wrong with my code. Please take a look. Thanks for your time.

Tags: python-3.x, dataframe, python-multiprocessing

Solution


I tried not using trange inside my function, and it works. This seems strange, and I don't know the reason.
