I have a query that takes 2.5 seconds to run. Checking the query plan, I found that PostgreSQL is heavily underestimating the number of rows, which leads it to choose a nested loop join.
Here is the query:

explain analyze 
SELECT 
  reprocessed_videos.video_id AS reprocessed_videos_video_id 
FROM 
  reprocessed_videos 
  JOIN commit_info ON commit_info.id = reprocessed_videos.commit_id 
WHERE 
  commit_info.tag = 'stop_sign_tbc_inertial_fix' 
  AND reprocessed_videos.reprocess_type_id = 28 
  AND reprocessed_videos.classification_crop_type_id = 0 
  AND reprocessed_videos.reprocess_status = 'success';

Here is the output of EXPLAIN ANALYZE:

  Nested Loop  (cost=0.84..22941.18 rows=1120 width=4) (actual time=31.169..2650.181 rows=179524 loops=1)
   ->  Index Scan using commit_info_tag_key on commit_info  (cost=0.28..8.29 rows=1 width=4) (actual time=0.395..0.397 rows=1 loops=1)
         Index Cond: ((tag)::text = 'stop_sign_tbc_inertial_fix'::text)
   ->  Index Scan using ix_reprocessed_videos_commit_id on reprocessed_videos  (cost=0.56..22919.99 rows=1289 width=8) (actual time=30.770..2634.546 rows=179524 loops=1)
         Index Cond: (commit_id = commit_info.id)
         Filter: ((reprocess_type_id = 28) AND (classification_crop_type_id = 0) AND ((reprocess_status)::text = 'success'::text))
         Rows Removed by Filter: 1190
 Planning Time: 0.326 ms
 Execution Time: 2657.724 ms


As we can see, the index scan using ix_reprocessed_videos_commit_id was expected to return 1289 rows, but actually returned 179524 rows. I have been trying to find the cause, but nothing I have done so far has worked.

Here are the things I have tried:

  1. Running VACUUM and ANALYZE on all the tables involved (sketched after this list). It helped a little, but not much, probably because the tables are already autovacuumed and auto-analyzed.
  2. Increasing the statistics target for the commit_id column with alter table reprocessed_videos alter column commit_id set statistics 1000; (helped slightly).
  3. I read about extended statistics, but I am not sure whether they would help here.
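For reference, items 1 and 2 presumably boil down to statements along these lines (a sketch; the statistics target of 1000 is simply the value quoted above):

-- Item 1: manually vacuum and analyze the tables involved
VACUUM (ANALYZE) reprocessed_videos;
VACUUM (ANALYZE) commit_info;

-- Item 2: raise the per-column statistics target for commit_id,
-- then re-analyze so the new MCV list and histogram are built with it
ALTER TABLE reprocessed_videos ALTER COLUMN commit_id SET STATISTICS 1000;
ANALYZE reprocessed_videos;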

Here is the number of tuples in each table:

kpis=> SELECT relname, reltuples FROM pg_class where relname in ('reprocessed_videos', 'video_catalog', 'commit_info');
      relname       |   reltuples   
--------------------+---------------
 commit_info        |          1439
 reprocessed_videos | 3.1563756e+07

Here is some information about the relevant table schemas:

                                                 Table "public.reprocessed_videos"
           Column            |            Type             | Collation | Nullable |                    Default                     
-----------------------------+-----------------------------+-----------+----------+------------------------------------------------
 id                          | integer                     |           | not null | nextval('reprocessed_videos_id_seq'::regclass)
 video_id                    | integer                     |           |          | 
 reprocess_status            | character varying           |           |          | 
 commit_id                   | integer                     |           |          | 
 reprocess_type_id           | integer                     |           |          | 
 classification_crop_type_id | integer                     |           |          | 
Indexes:
    "reprocessed_videos_pkey" PRIMARY KEY, btree (id)
    "ix_reprocessed_videos_commit_id" btree (commit_id)
    "ix_reprocessed_videos_video_id" btree (video_id)
    "reprocessed_videos_video_commit_reprocess_crop_key" UNIQUE CONSTRAINT, btree (video_id, commit_id, reprocess_type_id, classification_crop_type_id)
Foreign-key constraints:
    "reprocessed_videos_commit_id_fkey" FOREIGN KEY (commit_id) REFERENCES commit_info(id)

                                         Table "public.commit_info"
         Column         |       Type        | Collation | Nullable |                 Default                 
------------------------+-------------------+-----------+----------+-----------------------------------------
 id                     | integer           |           | not null | nextval('commit_info_id_seq'::regclass)
 tag                    | character varying |           |          | 
 commit                 | character varying |           |          | 

Indexes:
    "commit_info_pkey" PRIMARY KEY, btree (id)
    "commit_info_tag_key" UNIQUE CONSTRAINT, btree (tag)


Here are the experiments I have tried:

  1. Disabling index scans:
 Nested Loop  (cost=734.59..84368.70 rows=1120 width=4) (actual time=274.694..934.965 rows=179524 loops=1)
   ->  Bitmap Heap Scan on commit_info  (cost=4.29..8.30 rows=1 width=4) (actual time=0.441..0.444 rows=1 loops=1)
         Recheck Cond: ((tag)::text = 'stop_sign_tbc_inertial_fix'::text)
         Heap Blocks: exact=1
         ->  Bitmap Index Scan on commit_info_tag_key  (cost=0.00..4.29 rows=1 width=0) (actual time=0.437..0.439 rows=1 loops=1)
               Index Cond: ((tag)::text = 'stop_sign_tbc_inertial_fix'::text)
   ->  Bitmap Heap Scan on reprocessed_videos  (cost=730.30..84347.51 rows=1289 width=8) (actual time=274.250..920.137 rows=179524 loops=1)
         Recheck Cond: (commit_id = commit_info.id)
         Filter: ((reprocess_type_id = 28) AND (classification_crop_type_id = 0) AND ((reprocess_status)::text = 'success'::text))
         Rows Removed by Filter: 1190
         Heap Blocks: exact=5881
         ->  Bitmap Index Scan on ix_reprocessed_videos_commit_id  (cost=0.00..729.98 rows=25256 width=0) (actual time=273.534..273.534 rows=180714 loops=1)
               Index Cond: (commit_id = commit_info.id)
 Planning Time: 0.413 ms
 Execution Time: 941.874 ms

I also set updated statistics for the commit_id column. I observed roughly a 3x speedup.

  2. When I tried disabling bitmap scans as well, the query performed a sequential scan and took 19 seconds.

Recommended answer

A nested loop is the perfect join strategy here, because only a single row comes from commit_info. Any other join strategy would lose.

The question is whether the index scan on reprocessed_videos is really too slow. To experiment, try again after SET enable_indexscan = off; to get a bitmap index scan, and see whether that does better. Then also SET enable_bitmapscan = off; to get a sequential scan. I suspect that your current plan will win, but the bitmap index scan has a good chance.
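For completeness, that experiment could be run in a single session roughly like this (a sketch; enable_indexscan and enable_bitmapscan are standard planner settings, and RESET restores the defaults afterwards):

-- take the plain index scan off the table, so the planner falls back to a bitmap scan
SET enable_indexscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT reprocessed_videos.video_id AS reprocessed_videos_video_id
FROM reprocessed_videos
  JOIN commit_info ON commit_info.id = reprocessed_videos.commit_id
WHERE commit_info.tag = 'stop_sign_tbc_inertial_fix'
  AND reprocessed_videos.reprocess_type_id = 28
  AND reprocessed_videos.classification_crop_type_id = 0
  AND reprocessed_videos.reprocess_status = 'success';

-- additionally rule out bitmap scans, leaving only a sequential scan; rerun the EXPLAIN above
SET enable_bitmapscan = off;

-- restore the session defaults when done
RESET enable_indexscan;
RESET enable_bitmapscan;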


If the bitmap index scan is better, you should indeed try to improve the estimate:

ALTER TABLE reprocessed_videos ALTER commit_id SET STATISTICS 1000;
ANALYZE reprocessed_videos;

You can try other values; pick the lowest value that gives you a good enough estimate.
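One way to see what ANALYZE produced for that column (a sketch, assuming the standard pg_stats view) is to check its distinct-value estimate and most-common-values list, which is what the per-commit row estimate is derived from:

SELECT attname, n_distinct, most_common_freqs[1] AS top_mcv_freq
FROM pg_stats
WHERE tablename = 'reprocessed_videos'
  AND attname = 'commit_id';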

Another thing worth trying is extended statistics:

CREATE STATISTICS corr (dependencies)
   ON reprocess_type_id, classification_crop_type_id, reprocess_status
   FROM reprocessed_videos;

ANALYZE reprocessed_videos;

Perhaps you don't even need all three columns in there; play around with it.
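Assuming PostgreSQL 12 or later, the functional dependencies that ANALYZE actually found can be inspected through the pg_stats_ext view, which helps decide which of the three columns really need to be in the statistics object (a sketch; 'corr' is just the name chosen above):

SELECT statistics_name, attnames, dependencies
FROM pg_stats_ext
WHERE statistics_name = 'corr';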


If the bitmap index scan does not provide enough benefit, there is one way to speed up the current index scan:

CLUSTER reprocessed_videos USING ix_reprocessed_videos_commit_id;

That rewrites the table in index order (and blocks concurrent access while it runs, so be careful!). After that, the index scan is likely to be much faster. However, the order is not maintained afterwards, so you will have to repeat the CLUSTER occasionally once enough of the table has been modified.
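One way to judge when it is worth repeating (a sketch, assuming the standard pg_stats view) is to watch the planner's correlation statistic for commit_id after an ANALYZE: values close to 1 mean the heap still largely follows the index order, while values near 0 mean the physical order has degraded again:

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'reprocessed_videos'
  AND attname = 'commit_id';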
