Storing Parquet file column groups in separate files

Problem description

I want to store a tabular dataset in Parquet format, using different files for different groups of columns. Is it possible to partition a Parquet file by column? If so, can this be done in Python (pyarrow)?

I have a large dataset that collects attributes/features (columns) for many objects (rows). There are roughly 100k-1M rows (and the row count grows over time). The columns, by contrast, are logically grouped into 200 groups of 200-1000 columns each. The total number of columns is fixed, but their data is acquired sequentially: col group 1, then col group 2, and so on. However, the column names, types, and counts for a group are not known in advance; they only become known when the first batch of data for that column group is received.

This data will be collected over time. As data arrives, I want to store this growing set of columns in Parquet. Eventually, all column groups will be filled with data. Over time, new objects (rows) will arrive; their data always starts with col group 1 and gradually fills in the other groups.

Is it possible (or advisable) to store this data as one logical Parquet file split across multiple files on the filesystem, where each file contains one column group (200-1000 columns)? Can someone provide an example of storing such a file using python/pandas/pyarrow?

Alternatively, each col group could be stored as a separate logical Parquet file. In that case, all files would share an object_id index column, but each Parquet file (for one col group) would contain a different subset of objects. Any thoughts or suggestions are appreciated.

Tags: python, pandas, parquet, pyarrow, apache-arrow

Solution


A Parquet file has a single schema. Even with multiple partitions, each partition has the same schema, which is what allows tools to read those files as if they were a single file.

If the incoming data on the pandas side keeps changing shape, writing to the Parquet dataset will not work, because the new schema differs from the schema already in place.

To make this work for your data pipeline, you need to consider at least the following:

Collect all columns, along with their data types and column order

Format each dataframe to contain all of those columns, with the specified data types and column order

Write to Parquet

See the code below for more detail on what can fail.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

#First Table: establishes the schema of the dataset
df = pd.DataFrame({"Date":{"0":1514764800000,"1":1514851200000,"2":1514937600000,"3":1515024000000,"4":1515110400000,"5":1515196800000,"6":1515283200000,"7":1515369600000},"Day":{"0":1,"1":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8},"Year":{"0":2018,"1":2018,"2":2018,"3":2018,"4":2018,"5":2018,"6":2018,"7":2018},"Month":{"0":1,"1":1,"2":1,"3":1,"4":1,"5":1,"6":1,"7":1},"randNumCol":{"0":2,"1":5,"2":4,"3":3,"4":3,"5":5,"6":4,"7":3},"uuid":{"0":"456578af-8953-4cf7-ac27-70309353b72c","1":"df6a30da-619e-4594-a051-4fdb3572eb49","2":"7cfe724a-a827-47b1-a691-c741f4f1101d","3":"f1796ed1-f7ce-4b49-ba64-6aacdca02c0a","4":"827e4aae-1214-4c0f-ac7f-9439e8a577af","5":"08dc3c2b-b75c-4ac6-8a38-0a44007fdeaf","6":"54f4e7bb-6fd8-4913-a2c3-69ebc13dc9a2","7":"eda1dbfe-ad08-4067-b064-bcc689fa0225"},"NEWCOLUMN":{"0":1514764800000,"1":1514851200000,"2":1514937600000,"3":1515024000000,"4":1515110400000,"5":1515196800000,"6":1515283200000,"7":1515369600000}})
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table,root_path='output.parquet',partition_cols=['Year','Month','Day'])
#Read Table OK
pandas_df=pd.read_parquet('output.parquet')
print(pandas_df)

#Second Table Same Exact Columns in the Same order
df = pd.DataFrame({"Date":{"0":1514764800000,"1":1514851200000,"2":1514937600000,"3":1515024000000,"4":1515110400000,"5":1515196800000,"6":1515283200000,"7":1515369600000},"Day":{"0":1,"1":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8},"Year":{"0":2018,"1":2018,"2":2018,"3":2018,"4":2018,"5":2018,"6":2018,"7":2018},"Month":{"0":1,"1":1,"2":1,"3":1,"4":1,"5":1,"6":1,"7":1},"randNumCol":{"0":2,"1":5,"2":4,"3":3,"4":3,"5":5,"6":4,"7":3},"uuid":{"0":"456578af-8953-4cf7-ac27-70309353b72c","1":"df6a30da-619e-4594-a051-4fdb3572eb49","2":"7cfe724a-a827-47b1-a691-c741f4f1101d","3":"f1796ed1-f7ce-4b49-ba64-6aacdca02c0a","4":"827e4aae-1214-4c0f-ac7f-9439e8a577af","5":"08dc3c2b-b75c-4ac6-8a38-0a44007fdeaf","6":"54f4e7bb-6fd8-4913-a2c3-69ebc13dc9a2","7":"eda1dbfe-ad08-4067-b064-bcc689fa0225"},"NEWCOLUMN":{"0":1514764800000,"1":1514764800000,"2":1514764800000,"3":1514764800000,"4":1514764800000,"5":1514764800000,"6":1514764800000,"7":1514764800000}})
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table,root_path='output.parquet',partition_cols=['Year','Month','Day'])
#Read Table OK
pandas_df=pd.read_parquet('output.parquet')
print(pandas_df)

#Third Table same exact columns but wrong order ->Fails
df = pd.DataFrame({"NEWCOLUMN":{"0":1514764800000,"1":1514851200000,"2":1514937600000,"3":1515024000000,"4":1515110400000,"5":1515196800000,"6":1515283200000,"7":1515369600000},"Day":{"0":1,"1":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8},"Year":{"0":2018,"1":2018,"2":2018,"3":2018,"4":2018,"5":2018,"6":2018,"7":2018},"Month":{"0":1,"1":1,"2":1,"3":1,"4":1,"5":1,"6":1,"7":1},"randNumCol":{"0":2,"1":5,"2":4,"3":3,"4":3,"5":5,"6":4,"7":3},"uuid":{"0":"456578af-8953-4cf7-ac27-70309353b72c","1":"df6a30da-619e-4594-a051-4fdb3572eb49","2":"7cfe724a-a827-47b1-a691-c741f4f1101d","3":"f1796ed1-f7ce-4b49-ba64-6aacdca02c0a","4":"827e4aae-1214-4c0f-ac7f-9439e8a577af","5":"08dc3c2b-b75c-4ac6-8a38-0a44007fdeaf","6":"54f4e7bb-6fd8-4913-a2c3-69ebc13dc9a2","7":"eda1dbfe-ad08-4067-b064-bcc689fa0225"},"Date":{"0":1514764800000,"1":1514764800000,"2":1514764800000,"3":1514764800000,"4":1514764800000,"5":1514764800000,"6":1514764800000,"7":1514764800000}})
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table,root_path='output.parquet',partition_cols=['Year','Month','Day'])
pandas_df=pd.read_parquet('output.parquet')
print(pandas_df)

#Fourth Table with "NEWCOLUMN" left out ->Fails
df = pd.DataFrame({"Date":{"0":1514764800000,"1":1514851200000,"2":1514937600000,"3":1515024000000,"4":1515110400000,"5":1515196800000,"6":1515283200000,"7":1515369600000},"Day":{"0":1,"1":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8},"Year":{"0":2018,"1":2018,"2":2018,"3":2018,"4":2018,"5":2018,"6":2018,"7":2018},"Month":{"0":1,"1":1,"2":1,"3":1,"4":1,"5":1,"6":1,"7":1},"randNumCol":{"0":2,"1":5,"2":4,"3":3,"4":3,"5":5,"6":4,"7":3},"uuid":{"0":"456578af-8953-4cf7-ac27-70309353b72c","1":"df6a30da-619e-4594-a051-4fdb3572eb49","2":"7cfe724a-a827-47b1-a691-c741f4f1101d","3":"f1796ed1-f7ce-4b49-ba64-6aacdca02c0a","4":"827e4aae-1214-4c0f-ac7f-9439e8a577af","5":"08dc3c2b-b75c-4ac6-8a38-0a44007fdeaf","6":"54f4e7bb-6fd8-4913-a2c3-69ebc13dc9a2","7":"eda1dbfe-ad08-4067-b064-bcc689fa0225"}})
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table,root_path='output.parquet',partition_cols=['Year','Month','Day'])
pandas_df=pd.read_parquet('output.parquet')
print(pandas_df)
