Why is a SELECT query slow in Postgres?

Problem description

I have a simple Postgres table, and a simple query to count the total records takes a very long time. The table has 7.5 million records, and I am using an 8 vCPU, 32 GB RAM machine. The database runs on the same machine.

Edit: adding the query.

The following query is very slow:

SELECT * FROM import_csv WHERE processed = False ORDER BY id ASC OFFSET 1 LIMIT 10000

Output of explain:

$ explain SELECT * FROM import_csv WHERE processed = False ORDER BY id ASC OFFSET 1 LIMIT 10000

---------------------------------------------------------------------------------------------------------
 Limit  (cost=5.42..49915.17 rows=10000 width=1985)
   ->  Index Scan using import_csv_id_idx on import_csv  (cost=0.43..19144730.02 rows=3835870 width=1985)
         Filter: (NOT processed)
(3 rows)

My table is as follows:

      Column       |      Type      | Collation | Nullable | Default 
-------------------+----------------+-----------+----------+---------
 id                | integer        |           |          | 
 name              | character(500) |           |          | 
 domain            | character(500) |           |          | 
 year_founded      | real           |           |          | 
 industry          | character(500) |           |          | 
 size_range        | character(500) |           |          | 
 locality          | character(500) |           |          | 
 country           | character(500) |           |          | 
 linkedinurl       | character(500) |           |          | 
 employees         | integer        |           |          | 
 processed         | boolean        |           | not null | false
 employee_estimate | integer        |           |          | 
Indexes:
    "import_csv_id_idx" btree (id)
    "processed_idx" btree (processed)

Thanks

Edit 3:

# explain analyze SELECT * FROM import_csv WHERE processed = False ORDER BY id ASC OFFSET 1 LIMIT 10000;
                                                                          QUERY PLAN                                                                          
--------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5.42..49915.33 rows=10000 width=1985) (actual time=8331.070..8355.556 rows=10000 loops=1)
   ->  Index Scan using import_csv_id_idx on import_csv  (cost=0.43..19144790.06 rows=3835870 width=1985) (actual time=8331.067..8354.874 rows=10001 loops=1)
         Filter: (NOT processed)
         Rows Removed by Filter: 3482252
 Planning time: 0.081 ms
 Execution time: 8355.925 ms
(6 rows)

explain (analyze, buffers):

# explain (analyze, buffers) SELECT * FROM import_csv WHERE processed = False ORDER BY id ASC OFFSET 1 LIMIT 10000;


                                                                          QUERY PLAN                                                                          
--------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5.42..49915.33 rows=10000 width=1985) (actual time=8236.899..8260.941 rows=10000 loops=1)
   Buffers: shared hit=724036 read=2187905 dirtied=17 written=35
   ->  Index Scan using import_csv_id_idx on import_csv  (cost=0.43..19144790.06 rows=3835870 width=1985) (actual time=8236.896..8260.104 rows=10001 loops=1)
         Filter: (NOT processed)
         Rows Removed by Filter: 3482252
         Buffers: shared hit=724036 read=2187905 dirtied=17 written=35
 Planning time: 0.386 ms
 Execution time: 8261.406 ms
(8 rows)

Tags: postgresql

Solution


It is slow because it has to dig through 3,482,252 rows that fail the processed = False criterion before finding the 10001st row that passes it, and apparently all those failing rows are scattered randomly throughout the table, leading to a lot of slow IO.

You need an index on (processed, id), or on (id) where processed = false.

If you create the first of those, you can drop the index on processed alone, as it is no longer independently useful (if it ever was).
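As a sketch, either of the suggested indexes could be created as follows (the index names here are illustrative, not from the original schema):

```sql
-- Option 1: composite index. Postgres can walk rows in id order
-- while skipping directly past rows with the unwanted processed value.
CREATE INDEX import_csv_processed_id_idx
    ON import_csv (processed, id);

-- Option 2: partial index. Smaller, since it covers only the
-- unprocessed rows, which suits a query that always filters
-- on processed = false.
CREATE INDEX import_csv_unprocessed_id_idx
    ON import_csv (id)
    WHERE processed = false;

-- With the composite index in place, the standalone boolean index
-- is redundant and can be dropped:
DROP INDEX processed_idx;
```

Either index lets the planner find the first 10,000 matching rows without filtering out millions of processed rows along the way; re-running EXPLAIN (ANALYZE, BUFFERS) afterwards should confirm the change of plan.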
