Deleting rows from a pandas DataFrame while iterating over it

Problem description

I have the following Python script. In it, I iterate over a CSV file that contains rows of loyalty cards. In many cases there are multiple entries per card. I currently loop over each row and then use loc to find all other instances of the current row's card, so I can combine them into a single POST to an API. What I would like to do, once that POST completes, is delete all the rows I just merged, so that the iteration does not hit them again.

This is the part I am stuck on. Any ideas? Essentially, I want to remove all the rows in card_list from the CSV before the next iteration, so that even if there are, say, 5 rows with the same card number, I only process that card once. I tried using

csv = csv[csv.card != row.card]

at the end of the loop, thinking it might regenerate the dataframe without any rows whose card matches the one just processed, but it doesn't work.
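One likely reason the filter appears to have no effect: `itertuples()` iterates over the DataFrame object the loop started with, so rebinding the name `csv` afterwards does not change which rows the already-running iterator visits. A minimal sketch with a toy frame (not the real CSV) illustrating this:

```python
import pandas as pd

# Toy frame: two rows share card 111.
df = pd.DataFrame({'card': [111, 111, 222], 'voucher': ['a', 'b', 'c']})

seen = []
for row in df.itertuples():
    seen.append(row.card)
    # Rebinding `df` creates a brand-new object; the iterator produced
    # by `itertuples()` still walks the original three rows.
    df = df[df.card != row.card]

print(seen)     # all three original rows are still visited
print(len(df))  # the rebound frame ends up empty
```

So the filter does run, but only the name `df` changes; the loop keeps reading the original frame.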

import urllib3
import json
import pandas as pd 
import os
import time 
import pyfiglet
from datetime import datetime
import array as arr

for row in csv.itertuples():
    dt = datetime.now()
    vouchers = []
    if minutePassed(time.gmtime(lastrun)[4]):
        print('Getting new token...')
        token = get_new_token()
        lastrun = time.time()
    print('processing ' + str(int(row.card)))
    card_list = csv.loc[csv['card'] == int(row.card)]
    print('found ' + str(len(card_list)) + ' vouchers against this card')

    for row in card_list.itertuples():
        print('appending card ' + str(int(row.card)) + ' voucher ' + str(row.voucher))
        vouchers.append(row.voucher)
    print('vouchers, ', vouchers)

    encoded_data = json.dumps({
        "store_id":row.store,
        "transaction_id":"11111",
        "card_number":int(row.card),
        "voucher_instance_ids":vouchers
    })
    print(encoded_data)
    number += 1

    r = http.request('POST', lcs_base_path + 'customer/auth/redeem-commit',body=encoded_data,headers={'x-api-key': api_key, 'Authorization': 'Bearer ' + token})
    response_data = json.loads(r.data)

    if (r.status == 200):
        print (str(dt) + ' ' + str(number) + ' done. processing card:' + str(int(row.card)) + ' voucher:' + str(row.voucher) + ' store:' + str(row.store) + ' status: ' + response_data['response_message'] + ' request:' + response_data['lcs_request_id'])
    else:
        print (str(dt) + ' ' + str(number) +  'done. failed to commit ' + str(int(row.card)) + ' voucher:' + str(row.voucher) + ' store:' + str(row.store) + ' status: ' + response_data['message'])
        new_row = {'card':row.card, 'voucher':row.voucher, 'store':row.store, 'error':response_data['message']}
        failed_csv = failed_csv.append(new_row, ignore_index=True)
        failed_csv.to_csv(failed_csv_file, index=False)
        csv = csv[csv.card != row.card]
print ('script completed')
print (str(len(failed_csv)) + ' failed vouchers will be saved to failed_commits.csv')
print("--- %s seconds ---" % (time.time() - start_time))
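For reference, one way to get the "process each card only once" behaviour without deleting rows mid-iteration is to record handled card numbers in a `set` and skip later rows for the same card. A minimal sketch with toy data (column names assumed to match the script above; no API call):

```python
import pandas as pd

df = pd.DataFrame({
    'card':    [111, 111, 222, 111],
    'voucher': ['a', 'b', 'c', 'd'],
})

processed = set()   # card numbers already handled
batches = []        # one (card, vouchers) pair per card

for row in df.itertuples():
    if row.card in processed:
        continue  # this card was already batched; skip its duplicates
    card_list = df.loc[df['card'] == row.card]
    batches.append((int(row.card), list(card_list['voucher'])))
    processed.add(row.card)

print(batches)  # [(111, ['a', 'b', 'd']), (222, ['c'])]
```

This leaves the DataFrame untouched during iteration, which sidesteps the rebinding problem entirely.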

Tags: python, pandas, dataframe

Solution


The first rule of thumb is to never mutate the object you are iterating over. Also, I think you are misusing itertuples. Let's use groupby instead:

for card, card_list in csv.groupby('card'):
    # card_list now contains all the rows for one specific card,
    # exactly like `card_list` in your code
    print('processing', card)
    print('found', len(card_list), 'vouchers against this card')

    # `itertuples` is overkill here -- REMOVE IT
    # for row in card_list.itertuples():

    encoded_data = json.dumps({
        "store_id": card_list['store'].iloc[0],             # same as `row.store`
        "transaction_id": "11111",
        "card_number": int(card),
        "voucher_instance_ids": list(card_list['voucher'])  # same as `vouchers`
    })

    # ... rest of your code (POST, error handling, etc.)
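To make the groupby pattern concrete, here is a small self-contained run on a toy frame (column names assumed to match the question's CSV):

```python
import json

import pandas as pd

df = pd.DataFrame({
    'card':    [111, 111, 222],
    'store':   [5, 5, 9],
    'voucher': ['a', 'b', 'c'],
})

payloads = []
for card, card_list in df.groupby('card'):
    # Each group is a sub-frame holding every row for that card,
    # so the vouchers can be collected without an inner loop.
    payload = {
        "store_id": int(card_list['store'].iloc[0]),
        "transaction_id": "11111",
        "card_number": int(card),
        "voucher_instance_ids": list(card_list['voucher']),
    }
    payloads.append(payload)
    print(json.dumps(payload))
```

Each card appears exactly once, so there is no longer any need to delete rows while iterating.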
