How to speed up bulk inserts to MySQL with SQLAlchemy?

Problem description

I just noticed that bulk inserting into a MySQL / MariaDB database with SQLAlchemy is slow, even when using session.bulk_save_objects(objects). How can I make it faster?

MVCE

from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql import text
import click
import json
import sqlalchemy
import time
import uuid

Base = declarative_base()


class KeyValue(Base):
    __tablename__ = "KeyValue"
    key = Column(String(36), primary_key=True)
    value = Column(Text)

    def __repr__(self):
        return f"KeyValue(key='{self.key}', value='{self.value}')"


def run_benchmark(SQLALCHEMY_DATABASE_URI, n=1000, benchmark_type='orm-bulk'):
    engine = sqlalchemy.create_engine(SQLALCHEMY_DATABASE_URI)
    connection = engine.connect()

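    # Create the KeyValue table if it does not already exist.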
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()

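    # n random UUID keys; each value is a JSON array of 100 UUID strings (a few KB per row).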
    keys = [str(uuid.uuid4()) for _ in range(n)]
    values = [json.dumps([str(uuid.uuid4()) for _ in range(100)]) for _ in range(n)]
    if benchmark_type == 'orm-bulk':
        benchmark_orm_bulk_insert(session, keys, values)
    elif benchmark_type == 'print':
        print_query(keys, values)


def benchmark_orm_bulk_insert(session, keys, values):
    t0 = time.time()
    objects = [
        KeyValue(key=key, value=value)
        for key, value in zip(keys, values)
    ]
    session.bulk_save_objects(objects)
    session.commit()
    t1 = time.time()
    print(f"Inserted {len(keys)} entries in {t1 - t0:0.2f}s with ORM-Bulk "
          f"({len(keys)/(t1 - t0):0.2f} inserts/s).")


def print_query(keys, values):
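    # json.dumps produces double-quoted, backslash-escaped literals,
    # which MySQL accepts as strings under its default SQL mode.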
    print("INSERT INTO KeyValue (`key`, `value`) VALUES")
    for i, (key, value) in enumerate(zip(keys, values)):
        if i == 0:
            print(f"({json.dumps(key)}, {json.dumps(value)})")
        else:
            print(f", ({json.dumps(key)}, {json.dumps(value)})")
    print(";")


@click.command()
@click.option("-n", "n", required=True, type=int)
@click.option(
    "--mode",
    "mode",
    required=True,
    type=click.Choice(["orm-bulk", "print"]),
)
def entry_point(n, mode):
    run_benchmark("mysql+pymysql://root:password@localhost/benchmark", n, mode)


if __name__ == "__main__":
    entry_point()

This gives:

$ python3 benchmark.py -n 10_000 --mode orm-bulk           
Inserted 10000 entries in 3.28s with ORM-Bulk (3048.23 inserts/s).

# Using extended INSERT statements
$ python3 benchmark.py -n 10_000 --mode print > inserts.txt
$ time mysql benchmark < inserts.txt

real    2,93s
user    0,27s
sys 0,03s

So the SQLAlchemy bulk insert achieves 3048 inserts/s, while the raw SQL query reaches 3412 inserts/s.
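For completeness, the same extended INSERT can also be issued from Python, taking the mysql CLI out of the comparison. Below is a minimal sketch of such an extra benchmark mode (not part of the original script), reusing the text import from the MVCE; it only gets away without proper SQL escaping because the generated UUID/JSON data contains no single quotes, percent signs, or :name patterns:

def benchmark_raw_insert(session, keys, values):
    # Hypothetical extra mode: build the same extended INSERT that
    # print_query emits and execute it through the open session.
    rows = ", ".join(
        f"({json.dumps(key)}, {json.dumps(value)})"
        for key, value in zip(keys, values)
    )
    t0 = time.time()
    # One huge statement; for large n this can exceed max_allowed_packet,
    # in which case the rows need to be chunked.
    session.execute(text(f"INSERT INTO KeyValue (`key`, `value`) VALUES {rows}"))
    session.commit()
    t1 = time.time()
    print(f"Inserted {len(keys)} entries in {t1 - t0:0.2f}s with raw SQL "
          f"({len(keys)/(t1 - t0):0.2f} inserts/s).")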

Related observations, but not the main question

Note that both numbers are a far cry from the 313,000 inserts per second mentioned in High-speed inserts with MySQL. With

LOAD DATA LOCAL INFILE 'data.csv' INTO TABLE KeyValue FIELDS TERMINATED BY ',' ENCLOSED BY '"' IGNORE 1 LINES;

I get an execution time of 2.22s (4500 inserts/s), which is still a lot slower. Changing the quotechar from a double quote (") to a single quote ('), which greatly reduces the escaping, brought this down to 1.55s (6451 inserts/s).
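The data.csv consumed above can be generated with Python's csv module; a sketch assuming the keys and values lists from run_benchmark. With quotechar="'", the ENCLOSED BY clause must be changed to match, and since the JSON values are full of double quotes but contain no single quotes, almost nothing needs to be escaped any more:

import csv

# Sketch: write data.csv for LOAD DATA LOCAL INFILE. The header row is
# skipped by IGNORE 1 LINES. quotechar="'" is the faster variant; use
# quotechar='"' together with ENCLOSED BY '"' for the original numbers.
with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL, quotechar="'")
    writer.writerow(["key", "value"])
    writer.writerows(zip(keys, values))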

Changing bulk_insert_buffer_size to 256 MB did not help either (howto). This is not surprising in hindsight: bulk_insert_buffer_size only sizes MyISAM's bulk-insert cache tree, so it cannot speed up inserts into an InnoDB table.
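For completeness, the variable can also be raised per connection without editing my.cnf; a sketch using the session and the text import from the MVCE:

# Sketch: 256 MB bulk-insert buffer for this connection only. The variable
# affects MyISAM's bulk-insert cache, not InnoDB, and setting the session
# value may require admin privileges (the benchmark connects as root).
session.execute(text("SET SESSION bulk_insert_buffer_size = 268435456"))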

Changing the MySQL storage engine from InnoDB to MyISAM changed the time to 0.32s (31,250 inserts/s)!
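The switch itself is ordinary DDL and can be run through the same session (MySQL rebuilds the table, which takes a while once it is large):

# Sketch: convert the benchmark table to MyISAM before re-running.
session.execute(text("ALTER TABLE KeyValue ENGINE = MyISAM"))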

I also tried other storage engines, running each benchmark 3 times; one way to switch the engine per run is sketched below.
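The storage engine can also be fixed when the table is created, via SQLAlchemy's MySQL table options, so that every run starts from a fresh table in the engine under test. A sketch of the model from the MVCE; the engine name is illustrative:

class KeyValue(Base):
    __tablename__ = "KeyValue"
    # mysql_engine is a MySQL-dialect option honored by CREATE TABLE;
    # substitute whichever engine is being benchmarked.
    __table_args__ = {"mysql_engine": "MyISAM"}
    key = Column(String(36), primary_key=True)
    value = Column(Text)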

Tags: mysql, python-3.x, sqlalchemy

Solution
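One approach that typically closes most of the remaining gap, sketched here as one possibility rather than as the accepted answer: skip the ORM's unit of work and pass a plain list of dicts to a Core INSERT construct. SQLAlchemy then issues a single executemany, which PyMySQL rewrites into multi-row INSERT statements, essentially the same SQL that the print mode builds by hand:

def benchmark_core_insert(session, keys, values):
    # Hypothetical variant of the benchmark: Core insert + executemany.
    t0 = time.time()
    session.execute(
        KeyValue.__table__.insert(),
        [{"key": key, "value": value} for key, value in zip(keys, values)],
    )
    session.commit()
    t1 = time.time()
    print(f"Inserted {len(keys)} entries in {t1 - t0:0.2f}s with Core insert "
          f"({len(keys)/(t1 - t0):0.2f} inserts/s).")

If that is still too slow, LOAD DATA LOCAL INFILE and the storage-engine experiments above remain the heavier hammers.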

