Optimizing/Parallelizing a Simple but Large Pandas-Based Loop

Problem Description

I have a simple loop that processes a large dataset.

import numpy as np

for i in range(len(nbi.LONG_017)):
    StoredOCC = []
    for j in range(len(Final.X)):
        r = haversine(nbi.LONG_017[i], nbi.LAT_016[i], Final.X[j], Final.Y[j])
        if r < 0.03048:  # 0.03048 km = 100 ft
            SWw = Final.CUM_OCC[j]
            StoredOCC.append(SWw)
    if len(StoredOCC) != 0:
        nbi.loc[i, 'ADTT_02_2019'] = np.max(StoredOCC)

len(nbi.LONG_017) is 3,000 and len(Final.X) is 6 million data points, so the nested loop performs about 18 billion haversine calls.

I would like to know whether there is an efficient way to implement this code, or whether parallel computing could make it faster.

For my haversine function I used the code provided here: Haversine formula in Python (Bearing and Distance between two GPS points):

from math import radians, sin, cos, asin, sqrt

def haversine(lon1, lat1, lon2, lat2):
    """
    Calculate the great circle distance between two points
    on the earth (specified in decimal degrees)
    """
    # convert decimal degrees to radians
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])

    # haversine formula
    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
    c = 2 * asin(sqrt(a))
    r = 6372.8  # Radius of earth in kilometers. Use 3956 for miles
    return r * c
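
As a quick sanity check (this example is my addition, not part of the question; the Paris/London coordinates are only illustrative), the function returns the expected great-circle distance:

# Paris (2.3522 E, 48.8566 N) to London (-0.1278 E, 51.5074 N)
print(haversine(2.3522, 48.8566, -0.1278, 51.5074))  # ~343.6 km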

Tags: python, pandas, parallel-processing

Solution

Before talking about parallelization, you can start by optimizing your loop. The first approach is to iterate over the data itself instead of incrementing an index and looking the values up on every access:

# toy sample
import numpy as np
import pandas as pd

np.random.seed(1)
size_nbi = 20
size_Final = 100

nbi = pd.DataFrame({'LONG_017': np.random.random(size=size_nbi)/100 + 73,
                    'LAT_016': np.random.random(size=size_nbi)/100 + 73,})

Final = pd.DataFrame({'X': np.random.random(size=size_Final)/100 + 73,
                      'Y': np.random.random(size=size_Final)/100 + 73,
                      'CUM_OCC': np.random.randint(size_Final, size=size_Final)})

With your approach, DataFrames of this size take about 75 ms:

%%timeit
for i in range(len(nbi.LONG_017)):
    StoredOCC = []
    for j in range(len(Final.X)):
        r = haversine(nbi.LONG_017[i], nbi.LAT_016[i], Final.X[j], Final.Y[j])
        if r < 0.03048:
            SWw = Final.CUM_OCC[j]
            StoredOCC.append(SWw)
    if len(StoredOCC) != 0:
        nbi.loc[i, 'ADTT_02_2019'] = np.max(StoredOCC)
# 75.6 ms ± 4.05 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Now, if you change the loop slightly, iterating over the data itself and collecting the results in a list that you assign as a column after the loop, you get down to about 5 ms, a 15x speedup from this optimization alone (you may well find a better one):

%%timeit
res_ADIT = []
for lon1, lat1 in zip(nbi.LONG_017.to_numpy(),
                      nbi.LAT_016.to_numpy()):
    StoredOCC = []
    for lon2, lat2, SWw in zip(Final.X.to_numpy(), 
                               Final.Y.to_numpy(), 
                               Final.CUM_OCC.to_numpy()):
        r = haversine(lon1, lat1, lon2, lat2)
        if r < 0.03048:
            StoredOCC.append(SWw)
    if len(StoredOCC) != 0:
        res_ADIT.append(np.max(StoredOCC))
    else:
        res_ADIT.append(np.nan)
nbi['ADIT_v2'] = res_ADIT
# 5.23 ms ± 305 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Now you can go even further, vectorizing the second loop with numpy and passing the values already converted to radians instead of calling map on every iteration:

# empty list for result
res_ADIT = []

# work with arrays in radians
arr_lon2 = np.radians(Final.X.to_numpy())
arr_lat2 = np.radians(Final.Y.to_numpy())
arr_OCC = Final.CUM_OCC.to_numpy()

for lon1, lat1 in zip(np.radians(nbi.LONG_017),  # pass the radians directly
                      np.radians(nbi.LAT_016)):
    
    # do all the subtraction in a vectorized way
    arr_dlon = arr_lon2 - lon1
    arr_dlat = arr_lat2 - lat1

    # same here using numpy functions
    arr_dist = np.sin(arr_dlat/2)**2 + np.cos(lat1) * np.cos(arr_lat2) * np.sin(arr_dlon/2)**2
    arr_dist = 2 * np.arcsin(np.sqrt(arr_dist)) 
    arr_dist *= 6372.8

    # extract the values of CUM_OCC that meet the distance criterion
    r = arr_OCC[arr_dist < 0.03048]

    # check that at least one element met the criterion
    if r.size > 0:
        res_ADIT.append(max(r))
    else:
        res_ADIT.append(np.nan)

nbi['AUDIT_np'] = res_ADIT

Running timeit on this gives 819 µs ± 15.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each), about 90 times faster than your original solution on this small DataFrame, and all the results are identical:

print(nbi)
     LONG_017    LAT_016  ADTT_02_2019  ADIT_v2  AUDIT_np
0   73.004170  73.008007           NaN      NaN       NaN
1   73.007203  73.009683          30.0     30.0      30.0
2   73.000001  73.003134          14.0     14.0      14.0
3   73.003023  73.006923          82.0     82.0      82.0
4   73.001468  73.008764           NaN      NaN       NaN
5   73.000923  73.008946           NaN      NaN       NaN
6   73.001863  73.000850           NaN      NaN       NaN
7   73.003456  73.000391           NaN      NaN       NaN
8   73.003968  73.001698          21.0     21.0      21.0
9   73.005388  73.008781           NaN      NaN       NaN
10  73.004192  73.000983          93.0     93.0      93.0
11  73.006852  73.004211           NaN      NaN       NaN

You can play with the code a bit and increase the size of each toy DataFrame: you will see that the gain grows with the data size, and in particular with the size of Final. For example, with size_nbi = 20 and size_Final = 1000 you get a 400x speedup with the vectorized solution. For your full data size (3K x 6M) it will still take some time; on my computer I estimate around 25 minutes with the vectorized version, versus hundreds of hours for your original solution. If that is still not enough, you can then consider parallelization or numba, as sketched below.
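
For instance, here is a minimal numba sketch (my addition; it assumes numba is installed, and the function name max_occ_within is illustrative) that JIT-compiles the distance computation and parallelizes the outer loop over nbi with prange:

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def max_occ_within(lon1s, lat1s, lon2s, lat2s, occ, threshold_km):
    # all coordinates are expected in radians; occ as float64
    out = np.full(lon1s.shape[0], np.nan)
    for i in prange(lon1s.shape[0]):  # parallel outer loop
        best = np.nan
        for j in range(lon2s.shape[0]):
            dlon = lon2s[j] - lon1s[i]
            dlat = lat2s[j] - lat1s[i]
            a = np.sin(dlat/2)**2 + np.cos(lat1s[i]) * np.cos(lat2s[j]) * np.sin(dlon/2)**2
            if 2 * 6372.8 * np.arcsin(np.sqrt(a)) < threshold_km:
                if np.isnan(best) or occ[j] > best:
                    best = occ[j]
        out[i] = best
    return out

nbi['ADIT_numba'] = max_occ_within(
    np.radians(nbi.LONG_017.to_numpy()), np.radians(nbi.LAT_016.to_numpy()),
    np.radians(Final.X.to_numpy()), np.radians(Final.Y.to_numpy()),
    Final.CUM_OCC.to_numpy().astype(np.float64), 0.03048)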

