Prevent my RAM from reaching 100%

Problem description

I have a very simple Python script that reads a CSV file and sorts the rows by timestamp. However, the file is large enough (16 GB) that reading it consumes all of my RAM. When usage reaches 100% (that is, all 64 GB of RAM), my system freezes completely and I have to restart the computer.

Here is the code:

import pandas as pd
from time import time

filename = 'AKER_OB.csv'

start_ = time()
file_ = pd.read_csv(filename)
end_ = time()
duration = end_ - start_
print("The duration to load that file : {}".format(duration))

# Parse the timestamps and sort the rows chronologically.
file_['TimeStamp'] = pd.to_datetime(file_['TimeStamp'], format="%Y-%m-%d %H:%M:%S")
file_ = file_.sort_values('TimeStamp')

The first few rows (head) of AKER_OB.csv:

TimeStamp,Bid1,BidSize1,Bid2,BidSize2,Bid3,BidSize3,Bid4,BidSize4,Bid5,BidSize5,Bid6,BidSize6,Bid7,BidSize7,Bid8,BidSize8,Bid9,BidSize9,Bid10,BidSize10,Bid11,BidSize11,Bid12,BidSize12,Bid13,BidSize13,Bid14,BidSize14,Bid15,BidSize15,Bid16,BidSize16,Bid17,BidSize17,Bid18,BidSize18,Bid19,BidSize19,Bid20,BidSize20,Ask1,AskSize1,Ask2,AskSize2,Ask3,AskSize3,Ask4,AskSize4,Ask5,AskSize5,Ask6,AskSize6,Ask7,AskSize7,Ask8,AskSize8,Ask9,AskSize9,Ask10,AskSize10,Ask11,AskSize11,Ask12,AskSize12,Ask13,AskSize13,Ask14,AskSize14,Ask15,AskSize15,Ask16,AskSize16,Ask17,AskSize17,Ask18,AskSize18,Ask19,AskSize19,Ask20,AskSize20
2016-10-08 00:00:00,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:01,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:02,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:03,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:04,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:06,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:07,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:08,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

What is the proper way to solve this? A complete answer with a code snippet would be greatly appreciated.

Tags: python, pandas, memory, ram

Solution


Essentially, you have to implement your own out-of-core (external) sort.

  1. Use the pandas CSV chunker (read_csv with a chunksize argument) to split the file into two or more parts, sort each part by timestamp (one at a time!), save it to its own CSV file, and then free the memory with del. A sketch of this phase follows after the list.

  2. Merge the sorted files: open all of the saved pre-sorted files with a CSV reader, combine rows from the chunks as needed, and append the rows in sorted order to a single output file, as in the second sketch below.
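
A minimal sketch of the first phase, assuming the column layout shown above; the chunk size of 1,000,000 rows and the chunk_0000.csv naming scheme are illustrative choices, not requirements:

import pandas as pd

INPUT = 'AKER_OB.csv'
CHUNK_ROWS = 1_000_000  # illustrative; tune so one chunk fits comfortably in RAM

# Read the big CSV in chunks, sort each chunk by TimeStamp,
# write it to its own file, and free it before loading the next one.
for i, chunk in enumerate(pd.read_csv(INPUT, chunksize=CHUNK_ROWS)):
    chunk['TimeStamp'] = pd.to_datetime(chunk['TimeStamp'],
                                        format="%Y-%m-%d %H:%M:%S")
    chunk = chunk.sort_values('TimeStamp')
    chunk.to_csv('chunk_{:04d}.csv'.format(i), index=False)
    del chunk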
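
And a sketch of the merge phase, assuming the chunk files written above. It relies on the fact that "YYYY-MM-DD HH:MM:SS" timestamps sort lexicographically in chronological order, so the pre-sorted files can be merged on the raw first column without parsing the dates again:

import csv
import glob
import heapq

OUTPUT = 'AKER_OB_sorted.csv'
chunk_files = sorted(glob.glob('chunk_*.csv'))

def data_rows(path):
    # Yield the data rows of one pre-sorted chunk file, skipping its header.
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)
        yield from reader

with open(OUTPUT, 'w', newline='') as out:
    writer = csv.writer(out)
    # Copy the header once, from the first chunk file.
    with open(chunk_files[0], newline='') as f:
        writer.writerow(next(csv.reader(f)))
    # heapq.merge streams the rows, so only one row per chunk file
    # is held in memory at any time.
    merged = heapq.merge(*(data_rows(p) for p in chunk_files),
                         key=lambda row: row[0])
    writer.writerows(merged)

This keeps peak memory bounded by the size of one chunk during the first phase and by a handful of rows during the merge.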

