I have a question. Here is my query:
WITH route_ids_filtered_by_shipments AS (
SELECT DISTINCT
rts.route_id
FROM
route_to_shipment rts
JOIN
shipment s
ON rts.shipment_id = s.shipment_id
WHERE
s.store_sender_id = ANY('{"a2342659-5f2f-11eb-85a3-1c34dae33151","7955ab25-0511-11ee-885e-08c0eb32014b","319ce173-2614-11ee-b10a-08c0eb31fffb","4bdddeb3-5ec9-11ee-b10a-08c0eb31fffb","8e6054c5-6db3-11ea-9786-0050560307be","485dc39c-debc-11ed-885e-08c0eb32014b","217d0f7b-78de-11ea-a214-0050560307be","a5a8a21a-9b9a-11ec-b0fc-08c0eb31fffb","79e7d5be-ef8b-11eb-a0ee-ec0d9a21b021","3f35d68a-1212-11ec-85ad-1c34dae33151","087bcf22-5f30-11eb-85a3-1c34dae33151","c065e1c8-a679-11eb-85a9-1c34dae33151"}'::uuid[])
)
SELECT
r.acceptance_status
, count(*) count
FROM
route r
JOIN
route_ids_filtered_by_shipments rifs
ON r.route_id = rifs.route_id
WHERE
r.acceptance_status <> 'ERRORED'::route_acceptance_status
GROUP BY
r.acceptance_status;
Its execution plan (obtained via EXPLAIN (ANALYZE, BUFFERS, SETTINGS)):
HashAggregate (cost=579359.05..579359.09 rows=4 width=12) (actual time=6233.281..6669.596 rows=3 loops=1)
Group Key: r.acceptance_status
Batches: 1 Memory Usage: 24kB
Buffers: shared hit=14075979 read=573570
I/O Timings: shared/local read=19689.039
-> Hash Join (cost=564249.11..578426.89 rows=186432 width=4) (actual time=6064.176..6658.862 rows=69460 loops=1)
Hash Cond: (r.route_id = rts.route_id)
Buffers: shared hit=14075979 read=573570
I/O Timings: shared/local read=19689.039
-> Seq Scan on route r (cost=0.00..13526.16 rows=248230 width=20) (actual time=0.015..112.580 rows=248244 loops=1)
Filter: (acceptance_status <> 'ERRORED'::route_acceptance_status)
Rows Removed by Filter: 7879
Buffers: shared hit=5112 read=3492
I/O Timings: shared/local read=35.687
-> Hash (cost=561844.75..561844.75 rows=192349 width=16) (actual time=6063.413..6499.725 rows=69460 loops=1)
Buckets: 262144 Batches: 1 Memory Usage: 5304kB
Buffers: shared hit=14070867 read=570078
I/O Timings: shared/local read=19653.352
-> HashAggregate (cost=557997.77..559921.26 rows=192349 width=16) (actual time=6038.518..6487.332 rows=69460 loops=1)
Group Key: rts.route_id
Batches: 1 Memory Usage: 10257kB
Buffers: shared hit=14070867 read=570078
I/O Timings: shared/local read=19653.352
-> Gather (cost=1001.02..555707.18 rows=916234 width=16) (actual time=0.976..6341.587 rows=888024 loops=1)
Workers Planned: 7
Workers Launched: 7
Buffers: shared hit=14070867 read=570078
I/O Timings: shared/local read=19653.352
-> Nested Loop (cost=1.02..463083.78 rows=130891 width=16) (actual time=1.576..5990.903 rows=111003 loops=8)
Buffers: shared hit=14070867 read=570078
I/O Timings: shared/local read=19653.352
-> Parallel Index Only Scan using route_to_shipment_pkey on route_to_shipment rts (cost=0.56..78746.01 rows=517565 width=32) (actual time=0.050..733.728 rows=452894 loops=8)
Heap Fetches: 401042
Buffers: shared hit=94576 read=38851
I/O Timings: shared/local read=2255.435
-> Index Scan using shipment_pkey on shipment s (cost=0.46..0.74 rows=1 width=16) (actual time=0.011..0.011 rows=0 loops=3623151)
Index Cond: (shipment_id = rts.shipment_id)
" Filter: (store_sender_id = ANY ('{a2342659-5f2f-11eb-85a3-1c34dae33151,7955ab25-0511-11ee-885e-08c0eb32014b,319ce173-2614-11ee-b10a-08c0eb31fffb,4bdddeb3-5ec9-11ee-b10a-08c0eb31fffb,8e6054c5-6db3-11ea-9786-0050560307be,485dc39c-debc-11ed-885e-08c0eb32014b,217d0f7b-78de-11ea-a214-0050560307be,a5a8a21a-9b9a-11ec-b0fc-08c0eb31fffb,79e7d5be-ef8b-11eb-a0ee-ec0d9a21b021,3f35d68a-1212-11ec-85ad-1c34dae33151,087bcf22-5f30-11eb-85a3-1c34dae33151,c065e1c8-a679-11eb-85a9-1c34dae33151}'::uuid[]))"
Rows Removed by Filter: 1
Buffers: shared hit=13976291 read=531227
I/O Timings: shared/local read=17397.917
"Settings: effective_cache_size = '256GB', effective_io_concurrency = '250', max_parallel_workers = '24', max_parallel_workers_per_gather = '8', random_page_cost = '1', seq_page_cost = '1.2', work_mem = '128MB'"
Planning:
Buffers: shared hit=16
Planning Time: 0.409 ms
Execution Time: 6670.976 ms
My task is to get this query to execute in at most 1 second. Based on my current knowledge of PG query optimization, I can see in the plan that some nodes have a large number of heap fetches, which a VACUUM on the tables should fix. What I want to understand is:
- Why does PG choose the join's ON predicate rts.shipment_id = s.shipment_id as the basis for building the row set and then filter that set on store_sender_id, when there is a separate, highly selective index on the shipment.store_sender_id column? As I understand it, it would be faster to first find the relatively few rows matching store_sender_id and then filter on rts.shipment_id = s.shipment_id; or perhaps a combination of bitmap index scans (via BitmapAnd) could be used (see the sketch after this list).
- For Index Scan using shipment_pkey on shipment s (cost=0.46..0.74 rows=1 width=16) (actual time=0.011..0.011 rows=0 loops=3623151): if I multiply the actual total time by the loops counter to get the real time, I get close to 40 seconds, yet the query completes in 7 seconds. How can that be???
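For reference, here is a minimal sketch of the rewrite imagined in the first bullet: build the row set from the selective store_sender_id filter first, then join to route_to_shipment. The CTE name is made up, and this only pays off if indexes on shipment.store_sender_id and route_to_shipment.shipment_id exist; it is an experiment for comparing plans, not a guaranteed improvement.

WITH shipment_ids_filtered AS MATERIALIZED (
    -- Hypothetical: start from the few shipments matching the sender list.
    SELECT
        s.shipment_id
    FROM
        shipment s
    WHERE
        s.store_sender_id = ANY('{"a2342659-5f2f-11eb-85a3-1c34dae33151","7955ab25-0511-11ee-885e-08c0eb32014b","319ce173-2614-11ee-b10a-08c0eb31fffb","4bdddeb3-5ec9-11ee-b10a-08c0eb31fffb","8e6054c5-6db3-11ea-9786-0050560307be","485dc39c-debc-11ed-885e-08c0eb32014b","217d0f7b-78de-11ea-a214-0050560307be","a5a8a21a-9b9a-11ec-b0fc-08c0eb31fffb","79e7d5be-ef8b-11eb-a0ee-ec0d9a21b021","3f35d68a-1212-11ec-85ad-1c34dae33151","087bcf22-5f30-11eb-85a3-1c34dae33151","c065e1c8-a679-11eb-85a9-1c34dae33151}'::uuid[])
)
SELECT DISTINCT
    rts.route_id
FROM
    route_to_shipment rts
JOIN
    shipment_ids_filtered sif
ON rts.shipment_id = sif.shipment_id;
-- The outer aggregation over route stays unchanged.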
The index scan takes so long because:
- it is repeated so often, and
- the index doesn't contain enough columns, so PostgreSQL has to fetch the table rows.
Try this index:
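(The exact statement below is an assumption reconstructed from the plan rather than a quote: a covering index lets the store_sender_id filter be answered from the index alone, turning each of the 3.6 million per-loop heap visits into an index-only lookup.)

-- Hypothetical covering index; CREATE INDEX ON shipment (shipment_id)
-- INCLUDE (store_sender_id) would serve the same purpose.
CREATE INDEX ON shipment (shipment_id, store_sender_id);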
You also need to VACUUM the table frequently enough to keep the index-only scan efficient.
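A sketch of that maintenance, with table names taken from the plan (the autovacuum setting is only an example value):

-- Refresh the visibility map now, so index-only scans stop falling back
-- to the heap (cf. "Heap Fetches: 401042" in the plan above).
VACUUM (ANALYZE) shipment;
VACUUM (ANALYZE) route_to_shipment;
-- Longer term: make autovacuum run more often on this busy table.
ALTER TABLE route_to_shipment SET (autovacuum_vacuum_scale_factor = 0.02);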