What is a good practice to optimize Pandas computations when there are multiple criteria?

Problem description

When I write queries with many criteria, I often run into the question of how to speed the process up.

Basically, I use the apply function very often to get my results, but the computation frequently takes a very long time.

Is there a good practice for finding ways to optimize Pandas code?

Here is an example. I have a DataFrame representing a chat exchange, with 3 columns (timestamp, sender_id, receiver_id):

The goal is to find the proportion of messages that get a response within 5 minutes. Here is my code:

import pandas as pd
import numpy as np
import datetime

size_df = 30000
np.random.seed(42)

data = {
    ## timestamps stored as int64 nanoseconds since the Unix epoch, one message every 30 seconds
    'timestamp': pd.date_range('2019-03-01', periods=size_df, freq='30S').astype(int),
    'sender_id': np.random.randint(5, size=size_df),
    'receiver_id': np.random.randint(5, size=size_df)
}

dataframe = pd.DataFrame(data)

This is what the DataFrame looks like:

print(dataframe.head().to_string())
              timestamp  sender_id  receiver_id
0   1551398400000000000          4            2
1   1551398430000000000          3            2
2   1551398460000000000          1            1
3   1551398490000000000          4            3
4   1551398520000000000          4            3
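
As a side note, these integers are nanoseconds since the Unix epoch (that is what .astype(int) produces on a datetime index), so they can be turned back into readable dates with pd.to_datetime:

## the int64 values are nanoseconds since the epoch, pandas' default unit
print(pd.to_datetime(dataframe['timestamp']).head(2))
0   2019-03-01 00:00:00
1   2019-03-01 00:00:30
Name: timestamp, dtype: datetime64[ns]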

The function used with apply:

def apply_find_next_answer_within_5_min(row):
    """
        Find the index of the next response in a range of 5 minutes
    """
    [timestamp, sender, receiver] = row
    ## find the next responses from receiver to sender in the next 5 minutes 
    next_responses = df_groups.get_group((receiver, sender))["timestamp"]\
                        .loc[lambda x: (x > timestamp) & (x < timestamp + 5 * 60 * 1000 * 1000 * 1000)]
    ## if there is no next response, just return NaN
    if not next_responses.size:
        return np.nan
    ## find the next messages from sender to receiver in the next 5 minutes 
    next_messages = df_groups.get_group((sender, receiver))["timestamp"]\
            .loc[lambda x: (x > timestamp) & (x < timestamp + 5 * 60 * 1000 * 1000 * 1000)]

    ## if the first next message comes before the next response, return NaN; otherwise return the index of the next response
    return np.nan if next_messages.size and next_messages.iloc[0] < next_responses.iloc[0] else next_responses.index[0]

%%timeit
df_messages = dataframe.copy()
## group messages by (sender, receiver) so the applied function can look them up quickly
df_groups = df_messages.groupby(["sender_id", "receiver_id"])
df_messages["next_message"] = df_messages.apply(apply_find_next_answer_within_5_min, axis=1)

Output of timeit:

42 s ± 2.16 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

So it takes 42 seconds to apply the function to a DataFrame of 30,000 rows. I find that very long, but I cannot see how to make it more efficient. I already saved 40 seconds by using an intermediate groupby of senders and receivers instead of querying the big DataFrame inside the applied function.
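
For reference, the slower pattern I started from looked roughly like this, scanning the full DataFrame on every call (a sketch from memory, the exact code may have differed):

## slower variant: filter the whole DataFrame on each call instead of using
## the precomputed df_groups; this is roughly where the extra 40 seconds went
def find_next_responses_full_scan(timestamp, sender, receiver):
    return df_messages.loc[
        (df_messages.sender_id == receiver)
        & (df_messages.receiver_id == sender)
        & (df_messages.timestamp > timestamp)
        & (df_messages.timestamp < timestamp + 5 * 60 * 1000 * 1000 * 1000),
        "timestamp"
    ]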

And this would be the answer to this particular question:

1 - df_messages.next_message[lambda x: pd.isnull(x)].size / df_messages.next_message.size
0.2753
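
The same proportion can also be written as the mean of a boolean mask, since averaging True/False values gives the fraction of True:

## fraction of messages with a non-NaN next_message, i.e. answered within 5 minutes
df_messages.next_message.notna().mean()
0.2753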

So in a case like this, how do you find a way to compute things more efficiently? Are there any tricks to keep in mind?

In this example, I don't believe everything can be vectorized, but maybe it could be made faster by using more grouping?

Tags: python, pandas, dataframe, optimization, apply

Solution


You can try grouping the dataframe:

## reset_index so the original row index is available as a column later on;
## frozenset([se, re]) makes both directions of a conversation fall into the same group
groups = dataframe.reset_index().groupby(
    [frozenset([se, re]) for se, re in dataframe[['sender_id', 'receiver_id']].values]
)
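
The frozenset key is what merges the two directions of a conversation into one group, because frozenset([a, b]) == frozenset([b, a]), while a tuple keeps the order; a minimal illustration:

## frozenset is unordered (and hashable, unlike a plain set), so both
## directions of a sender/receiver pair compare equal
print(frozenset([2, 4]) == frozenset([4, 2]))   # True
print((2, 4) == (4, 2))                         # False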

Now you can create a boolean mask that encodes your conditions:

mask_1 = (  # within a group, check whether the following message is sent by the other person
            (groups.sender_id.diff(-1).ne(0)
            # or whether the person is talking to themselves
            | dataframe.sender_id.eq(dataframe.receiver_id))
            # and check whether the following message is within 5 minutes
            & groups.timestamp.diff(-1).gt(-5 * 60 * 1000 * 1000 * 1000))
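
Note the sign convention here: diff(-1) computes the current value minus the next one within each group, so the time gap to a later message comes out negative, and gt(-5 * 60 * 1000 * 1000 * 1000) therefore keeps rows whose next group message is less than 5 minutes away. A tiny sketch of the semantics:

## diff(-1) is the current value minus the next value
print(pd.Series([10, 40, 100]).diff(-1))
0   -30.0
1   -60.0
2     NaN
dtype: float64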

Now use the mask to create a column holding the index you are looking for, shifted within each group:

df_messages.loc[mask_1, 'next_message'] = groups['index'].shift(-1)[mask_1]
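
Since .loc only assigns where the mask is True, all other rows stay NaN when the column is freshly created; if next_message already exists (for instance from the apply run), you may want to reset it first, for example:

## start from a clean column so the non-matching rows end up as NaN
df_messages['next_message'] = np.nan
df_messages.loc[mask_1, 'next_message'] = groups['index'].shift(-1)[mask_1]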

You get the same result as with your method, and it should be faster:

print (df_messages.head(20))
              timestamp  sender_id  receiver_id  next_message
0   1551398400000000000          3            1           NaN
1   1551398430000000000          4            1           NaN
2   1551398460000000000          2            3           NaN
3   1551398490000000000          4            1           NaN
4   1551398520000000000          4            3           NaN
5   1551398550000000000          1            1           NaN
6   1551398580000000000          2            3          10.0
7   1551398610000000000          2            4           NaN
8   1551398640000000000          2            4           NaN
9   1551398670000000000          4            1           NaN
10  1551398700000000000          3            2           NaN
11  1551398730000000000          2            4           NaN
12  1551398760000000000          4            0          18.0
13  1551398790000000000          1            0           NaN
14  1551398820000000000          3            3          16.0
15  1551398850000000000          1            2           NaN
16  1551398880000000000          3            3           NaN
17  1551398910000000000          4            1           NaN
18  1551398940000000000          0            4           NaN
19  1551398970000000000          3            2           NaN
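
If you want to double-check that this agrees with the apply-based approach, one possible sanity check (a sketch, assuming apply_find_next_answer_within_5_min from the question is still defined) is:

## rerun the apply version on the original 3 columns and compare the two results
df_groups = dataframe.groupby(["sender_id", "receiver_id"])
next_message_apply = dataframe.apply(apply_find_next_answer_within_5_min, axis=1)
print(next_message_apply.equals(df_messages["next_message"]))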
