I have a query:
WITH route_ids_filtered_by_shipments AS (
    SELECT DISTINCT
        rts.route_id
    FROM
        route_to_shipment rts
    JOIN
        shipment s
        ON rts.shipment_id = s.shipment_id
    WHERE
        s.store_sender_id = ANY('{"a2342659-5f2f-11eb-85a3-1c34dae33151","7955ab25-0511-11ee-885e-08c0eb32014b","319ce173-2614-11ee-b10a-08c0eb31fffb","4bdddeb3-5ec9-11ee-b10a-08c0eb31fffb","8e6054c5-6db3-11ea-9786-0050560307be","485dc39c-debc-11ed-885e-08c0eb32014b","217d0f7b-78de-11ea-a214-0050560307be","a5a8a21a-9b9a-11ec-b0fc-08c0eb31fffb","79e7d5be-ef8b-11eb-a0ee-ec0d9a21b021","3f35d68a-1212-11ec-85ad-1c34dae33151","087bcf22-5f30-11eb-85a3-1c34dae33151","c065e1c8-a679-11eb-85a9-1c34dae33151"}'::uuid[])
)
SELECT
    r.acceptance_status
    , count(*) count
FROM
    route r
JOIN
    route_ids_filtered_by_shipments rifs
    ON r.route_id = rifs.route_id
WHERE
    r.acceptance_status <> 'ERRORED'::route_acceptance_status
GROUP BY
    r.acceptance_status;
Its execution plan (obtained via EXPLAIN (ANALYZE, BUFFERS, SETTINGS)):
HashAggregate  (cost=579359.05..579359.09 rows=4 width=12) (actual time=6233.281..6669.596 rows=3 loops=1)
  Group Key: r.acceptance_status
  Batches: 1  Memory Usage: 24kB
  Buffers: shared hit=14075979 read=573570
  I/O Timings: shared/local read=19689.039
  ->  Hash Join  (cost=564249.11..578426.89 rows=186432 width=4) (actual time=6064.176..6658.862 rows=69460 loops=1)
        Hash Cond: (r.route_id = rts.route_id)
        Buffers: shared hit=14075979 read=573570
        I/O Timings: shared/local read=19689.039
        ->  Seq Scan on route r  (cost=0.00..13526.16 rows=248230 width=20) (actual time=0.015..112.580 rows=248244 loops=1)
              Filter: (acceptance_status <> 'ERRORED'::route_acceptance_status)
              Rows Removed by Filter: 7879
              Buffers: shared hit=5112 read=3492
              I/O Timings: shared/local read=35.687
        ->  Hash  (cost=561844.75..561844.75 rows=192349 width=16) (actual time=6063.413..6499.725 rows=69460 loops=1)
              Buckets: 262144  Batches: 1  Memory Usage: 5304kB
              Buffers: shared hit=14070867 read=570078
              I/O Timings: shared/local read=19653.352
              ->  HashAggregate  (cost=557997.77..559921.26 rows=192349 width=16) (actual time=6038.518..6487.332 rows=69460 loops=1)
                    Group Key: rts.route_id
                    Batches: 1  Memory Usage: 10257kB
                    Buffers: shared hit=14070867 read=570078
                    I/O Timings: shared/local read=19653.352
                    ->  Gather  (cost=1001.02..555707.18 rows=916234 width=16) (actual time=0.976..6341.587 rows=888024 loops=1)
                          Workers Planned: 7
                          Workers Launched: 7
                          Buffers: shared hit=14070867 read=570078
                          I/O Timings: shared/local read=19653.352
                          ->  Nested Loop  (cost=1.02..463083.78 rows=130891 width=16) (actual time=1.576..5990.903 rows=111003 loops=8)
                                Buffers: shared hit=14070867 read=570078
                                I/O Timings: shared/local read=19653.352
                                ->  Parallel Index Only Scan using route_to_shipment_pkey on route_to_shipment rts  (cost=0.56..78746.01 rows=517565 width=32) (actual time=0.050..733.728 rows=452894 loops=8)
                                      Heap Fetches: 401042
                                      Buffers: shared hit=94576 read=38851
                                      I/O Timings: shared/local read=2255.435
                                ->  Index Scan using shipment_pkey on shipment s  (cost=0.46..0.74 rows=1 width=16) (actual time=0.011..0.011 rows=0 loops=3623151)
                                      Index Cond: (shipment_id = rts.shipment_id)
                                      Filter: (store_sender_id = ANY ('{a2342659-5f2f-11eb-85a3-1c34dae33151,7955ab25-0511-11ee-885e-08c0eb32014b,319ce173-2614-11ee-b10a-08c0eb31fffb,4bdddeb3-5ec9-11ee-b10a-08c0eb31fffb,8e6054c5-6db3-11ea-9786-0050560307be,485dc39c-debc-11ed-885e-08c0eb32014b,217d0f7b-78de-11ea-a214-0050560307be,a5a8a21a-9b9a-11ec-b0fc-08c0eb31fffb,79e7d5be-ef8b-11eb-a0ee-ec0d9a21b021,3f35d68a-1212-11ec-85ad-1c34dae33151,087bcf22-5f30-11eb-85a3-1c34dae33151,c065e1c8-a679-11eb-85a9-1c34dae33151}'::uuid[]))
                                      Rows Removed by Filter: 1
                                      Buffers: shared hit=13976291 read=531227
                                      I/O Timings: shared/local read=17397.917
Settings: effective_cache_size = '256GB', effective_io_concurrency = '250', max_parallel_workers = '24', max_parallel_workers_per_gather = '8', random_page_cost = '1', seq_page_cost = '1.2', work_mem = '128MB'
Planning:
  Buffers: shared hit=16
Planning Time: 0.409 ms
Execution Time: 6670.976 ms
My task is to get this query to run in at most 1 second. I can see in the plan (based on my current knowledge of PG query optimization) that some nodes have a large number of heap fetches, which can be cured with a VACUUM on the table. What I am trying to understand:
- Why did PG choose the join predicate rts.shipment_id = s.shipment_id as the basis for building the row set and only then filtered that set on store_sender_id, when there is a separate, highly selective index on the shipment.store_sender_id column? In my understanding, finding the comparatively small number of rows matching store_sender_id first and then filtering on rts.shipment_id = s.shipment_id would be much faster. Or there could be a combination of bitmap index scans (via BitmapAnd). (The rewrite I have in mind is sketched right after this list.)
- Index Scan using shipment_pkey on shipment s (cost=0.46..0.74 rows=1 width=16) (actual time=0.011..0.011 rows=0 loops=3623151)
  If I multiply the actual time by the loops counter to get the total time spent in this node, I come close to 40 seconds (0.011 ms × 3,623,151 loops ≈ 39.9 s), while the whole query completes in about 7 seconds. How can that be?
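For reference, the plan shape I expected corresponds to a rewrite along these lines. This is only a sketch: MATERIALIZED (available since PostgreSQL 12) is there just to force the shipment filter to be evaluated first, and shipments_filtered/sf are names I made up:

WITH shipments_filtered AS MATERIALIZED (
    -- find the comparatively few shipments of these senders first,
    -- which should be able to use the index on shipment.store_sender_id
    SELECT shipment_id
    FROM shipment
    WHERE store_sender_id = ANY('{"a2342659-5f2f-11eb-85a3-1c34dae33151","7955ab25-0511-11ee-885e-08c0eb32014b","319ce173-2614-11ee-b10a-08c0eb31fffb","4bdddeb3-5ec9-11ee-b10a-08c0eb31fffb","8e6054c5-6db3-11ea-9786-0050560307be","485dc39c-debc-11ed-885e-08c0eb32014b","217d0f7b-78de-11ea-a214-0050560307be","a5a8a21a-9b9a-11ec-b0fc-08c0eb31fffb","79e7d5be-ef8b-11eb-a0ee-ec0d9a21b021","3f35d68a-1212-11ec-85ad-1c34dae33151","087bcf22-5f30-11eb-85a3-1c34dae33151","c065e1c8-a679-11eb-85a9-1c34dae33151"}'::uuid[])
)
SELECT DISTINCT
    rts.route_id
FROM
    route_to_shipment rts
JOIN
    shipments_filtered sf
    ON rts.shipment_id = sf.shipment_id;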
The index scan is taking so long because
- it is repeated very often, and
- there are not enough index columns, so PostgreSQL has to fetch the rows from the table.
Try this index:
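Presumably a covering index along these lines, so that both the Index Cond and the Filter can be answered from the index alone (the exact definition is an assumption):

-- assumption: a multicolumn index containing both the join column and the
-- filter column, so the inner scan on shipment can become an index-only scan
CREATE INDEX ON shipment (shipment_id, store_sender_id);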
Frequent VACUUM is required to keep the index-only scan efficient.
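For example (a sketch; the scale factor below is an illustrative value, not a recommendation):

-- one-off: update the visibility map so index-only scans need fewer heap fetches
VACUUM (ANALYZE) shipment;
VACUUM (ANALYZE) route_to_shipment;
-- ongoing: have autovacuum process the table more often than the default
ALTER TABLE route_to_shipment SET (autovacuum_vacuum_scale_factor = 0.01);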