EmptyDataError: No columns to parse from file when reading multiple csv files from an S3 bucket into a pandas DataFrame

Problem description

I have a source s3 bucket containing around 500 csv files that I want to move to another s3 bucket. Before moving them I want to clean the data, so I am reading each file into a pandas DataFrame. My code works fine and returns DataFrames for several files, then it suddenly breaks with the error "EmptyDataError: No columns to parse from file".

import boto3
import pandas as pd
from io import BytesIO

sts_client = boto3.client('sts', region_name='us-east-1')
client = boto3.client('s3')

bucket = 'source bucket'
folder_path = 'mypath'

def get_keys(bucket, folder_path):
    keys = []
    resp = client.list_objects(Bucket=bucket, Prefix=folder_path)
    for obj in resp['Contents']:
        keys.append(obj['Key'])
    return keys

files = get_keys(bucket, folder_path)
print(files)

for file in files:
    f = BytesIO()
    client.download_fileobj(bucket, file, f)
    f.seek(0)
    my_df = pd.read_csv(f, header=None, escapechar='\\', encoding='utf-8', engine='python')
    # the files don't have column names, so provide them
    my_df.columns = ['col1', 'col2', 'col3', 'col4', 'col5']
    print(my_df.head())

Thanks in advance!

Tags: python, pandas, amazon-web-services, csv, amazon-s3

Solution

At least one of your files is zero bytes in size. os.path.getsize(file) only works on local paths, so instead use an S3 paginator with a JMESPath filter to list only non-empty objects, like this:

import boto3

client = boto3.client('s3', region_name='us-west-2')
paginator = client.get_paginator('list_objects')
page_iterator = paginator.paginate(Bucket='my-bucket')
filtered_iterator = page_iterator.search("Contents[?Size > `0`][]")
for key_data in filtered_iterator:
    print(key_data)
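Building on that, the size filter can be folded into the original download loop. The sketch below is only illustrative: the `non_empty_keys` helper and `load_csvs` function are names introduced here, and the bucket/prefix/column arguments are placeholders. It also catches `pandas.errors.EmptyDataError` as a fallback, since a file containing only whitespace has a non-zero size yet still has no columns to parse:

```python
import pandas as pd
from io import BytesIO
from pandas.errors import EmptyDataError

def non_empty_keys(objects):
    # `objects` is a list of dicts shaped like the 'Contents' entries
    # returned by list_objects: each has at least 'Key' and 'Size'.
    return [o['Key'] for o in objects if o.get('Size', 0) > 0]

def load_csvs(bucket, prefix, columns):
    # boto3 is imported here so the pure helper above can be used
    # without AWS dependencies or credentials.
    import boto3
    client = boto3.client('s3')
    paginator = client.get_paginator('list_objects')
    frames = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for key in non_empty_keys(page.get('Contents', [])):
            buf = BytesIO()
            client.download_fileobj(bucket, key, buf)
            buf.seek(0)
            try:
                df = pd.read_csv(buf, header=None, names=columns,
                                 escapechar='\\', encoding='utf-8')
            except EmptyDataError:
                continue  # e.g. a whitespace-only file: non-zero size, no columns
            frames.append(df)
    return frames
```

Using the paginator instead of a single list_objects call also avoids silently missing keys once the bucket grows past the 1000-object limit of one response.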
