How to extract one column from a 300GB file into another file

Problem Description

The problem is the sheer amount of data, and I have to process it on a personal laptop with 12GB of RAM. I tried looping over the file about 1M lines at a time and writing each batch with csv.writer, but csv.writer only manages roughly 1M lines every two hours. So, is there any other approach worth trying?

import csv
import json

lines = 10000000      # lines to read per chunk
former_str = ''       # last value seen, used to skip consecutive duplicates

for i in range(0, 330):
    list_str = []
    with open(file, 'r') as f:
        line_flag = 0
        # skip the chunks handled in earlier iterations
        for _ in range(i * lines):
            next(f)
        for line in f:
            line_flag = line_flag + 1
            data = json.loads(line)['name']
            if data != former_str:
                list_str.append(data)
                former_str = data
            if line_flag == lines:
                break
    # append this chunk's names to the output file (self.path is the output directory)
    with open(self.path + 'data_range\\names.csv', 'a', newline='') as writeFile:
        writer = csv.writer(writeFile, delimiter='\n')
        writer.writerow(list_str)

Another version

import csv

def read_large_file(f):
    """Yield the file in blocks of block_size stripped lines."""
    block_size = 200000000
    block = []
    for line in f:
        block.append(line[:-1])   # drop the trailing newline
        if len(block) == block_size:
            yield block
            block = []

    if block:
        yield block


def split_files():
    # write_file and write_name are defined elsewhere in the script
    with open(write_file, 'r') as f:
        i = 0
        for block in read_large_file(f):
            print(i)
            file_name = write_name + str(i) + '.csv'
            with open(file_name, 'w', newline='') as f_:
                writer = csv.writer(f_, delimiter='\n')
                writer.writerow(block)
            i += 1

This is after it reads one block and writes it out... I'm wondering why the disk transfer rate stays at around 0.

Tags: python, pandas, csv, bigdata, readline

Solution


Would something like this work?

Essentially, use a generator to avoid reading the entire file into memory, and write the data out one line at a time.

import jsonlines  # pip install jsonlines
from typing import Generator

def gen_lines(file_path: str, col_name: str) -> Generator[str, None, None]:
    # Stream the file one JSON object at a time instead of loading it all.
    with jsonlines.open(file_path) as f:
        for obj in f:
            yield obj[col_name]


# Here you could also switch to writing a jsonlines file again.
with open(output_file, "w") as out:
    for item in gen_lines(your_file_path, col_name_to_extract):
        out.write(f"{item}\n")
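
If installing jsonlines is not an option, the same streaming idea works with the standard library alone. The sketch below assumes, as the question's code does, that every input line is one JSON object with a "name" key; the paths data.jsonl and names.csv are placeholders, and it keeps the consecutive-duplicate filter (former_str) from the first attempt.

import json

def gen_names(file_path: str, col_name: str):
    """Yield one value of col_name per input line, never holding the whole file."""
    with open(file_path, 'r') as f:
        for line in f:
            yield json.loads(line)[col_name]


former = None  # last value written, used to skip consecutive duplicates
with open('names.csv', 'w', newline='') as out:      # placeholder output path
    for name in gen_names('data.jsonl', 'name'):     # placeholder input path and column
        if name != former:
            out.write(f"{name}\n")
            former = name

Either way, only the current line (plus the OS write buffer) is ever held in memory, so the 12GB of RAM should not be the bottleneck; the run time is dominated by sequential disk I/O.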

