I have a couple of joined tables containing industrial data:
create table v2.tag (
    tag_id integer generated always as identity,
    tag text not null,
    primary key (tag_id),
    unique (tag)
);

create table v2.state (
    tag_id integer not null,
    "timestamp" timestamp without time zone not null,
    value float not null,
    primary key (tag_id, timestamp),
    foreign key (tag_id) references v2.tag (tag_id)
) partition by range (timestamp);
The state table contains about 50 million rows of time-series data covering the last 6 months, and I need to benchmark it with various queries. The table is partitioned by month.
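For completeness, the monthly partitions are defined roughly like this (a sketch; the exact bounds are from memory, but the names follow the state_y2024mNN pattern that shows up in the plan further down):

```sql
-- One partition per calendar month, e.g. February 2024:
create table v2.state_y2024m02 partition of v2.state
    for values from ('2024-02-01') to ('2024-03-01');
```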
The query I'm testing simply counts data points per day and tag, something any real TSDB handles easily on a dataset this small:
SELECT
    count(*) as points,
    date_trunc('day', timestamp) as timestamp,
    tag.tag
FROM
    v2.state
JOIN
    v2.tag USING (tag_id)
GROUP BY
    timestamp, tag
ORDER BY
    timestamp ASC;
The problem is that, for some reason, this query makes the database chew through almost 3 GB of RAM and returns a pile of duplicates, like this:
2024-02-01 00:00:00 | /Heave | 1
2024-02-01 00:00:00 | /Pitch | 1
2024-02-01 00:00:00 | /Roll | 1
2024-02-01 00:00:00 | /Velocity | 1
2024-02-01 00:00:00 | /Heave | 1
...
and so on, all for the same day. I never get to the next day; it just keeps repeating these rows in the result instead of counting per tag as I expected. So rather than counting points per day/tag, it appears to emit one row for each of the ~50 million rows in the database.
So something is going wrong in the aggregation. I expected this query to return roughly 12K rows (65*30*6), but it returns millions instead, which OOMs the Jupyter notebook I try to load them into.
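As a sanity check on that ~12K estimate, this is the kind of query I'd use to count the distinct (day, tag) combinations directly (a sketch; I haven't included its output here):

```sql
-- How many distinct (day, tag) combinations actually exist?
SELECT count(*)
FROM (
    SELECT DISTINCT date_trunc('day', "timestamp"), tag_id
    FROM v2.state
) AS daily_tags;
```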
I tried running it with EXPLAIN ANALYZE, but since I'm new to Postgres, it doesn't really... explain anything to me:
Sort  (cost=700769.72..700798.22 rows=11400 width=78) (actual time=80503.260..83825.211 rows=47499969 loops=1)
  Sort Key: (date_trunc('day'::text, state."timestamp"))
  Sort Method: external merge  Disk: 4703296kB
  ->  Finalize GroupAggregate  (cost=697027.86..700001.55 rows=11400 width=78) (actual time=35609.801..64328.719 rows=47499969 loops=1)
        Group Key: state."timestamp", tag.tag
        ->  Gather Merge  (cost=697027.86..699688.05 rows=22800 width=70) (actual time=35609.453..55143.276 rows=47499969 loops=1)
              Workers Planned: 2
              Workers Launched: 2
              ->  Sort  (cost=696027.84..696056.34 rows=11400 width=70) (actual time=34526.070..42018.956 rows=15833323 loops=3)
                    Sort Key: state."timestamp", tag.tag
                    Sort Method: external merge  Disk: 1414088kB
                    Worker 0:  Sort Method: external merge  Disk: 1446832kB
                    Worker 1:  Sort Method: external merge  Disk: 1470664kB
                    ->  Partial HashAggregate  (cost=695145.67..695259.67 rows=11400 width=70) (actual time=8690.289..20138.661 rows=15833323 loops=3)
                          Group Key: state."timestamp", tag.tag
                          Batches: 1029  Memory Usage: 8241kB  Disk Usage: 1694608kB
                          Worker 0:  Batches: 901  Memory Usage: 8241kB  Disk Usage: 1727928kB
                          Worker 1:  Batches: 773  Memory Usage: 8241kB  Disk Usage: 1748528kB
                          ->  Hash Join  (cost=2.28..652834.40 rows=5641502 width=62) (actual time=138.598..4142.702 rows=15833323 loops=3)
                                Hash Cond: (state.tag_id = tag.tag_id)
                                ->  Parallel Append  (cost=0.00..599769.83 rows=19794743 width=12) (actual time=138.383..2665.699 rows=15833323 loops=3)
                                      ->  Parallel Seq Scan on state_y2024m04 state_4  (cost=0.00..221214.31 rows=8745831 width=12) (actual time=39.308..827.302 rows=6996457 loops=3)
                                      ->  Parallel Seq Scan on state_y2024m02 state_2  (cost=0.00..172317.34 rows=6809434 width=12) (actual time=58.866..1102.604 rows=8171318 loops=2)
                                      ->  Parallel Seq Scan on state_y2024m03 state_3  (cost=0.00..78305.04 rows=3095204 width=12) (actual time=0.766..694.493 rows=7428501 loops=1)
                                      ->  Parallel Seq Scan on state_y2024m05 state_5  (cost=0.00..28879.42 rows=1141442 width=12) (actual time=180.418..416.467 rows=2739461 loops=1)
                                      ->  Parallel Seq Scan on state_y2024m01 state_1  (cost=0.00..20.00 rows=1000 width=12) (actual time=0.000..0.001 rows=0 loops=1)
                                      ->  Parallel Seq Scan on state_y2024m06 state_6  (cost=0.00..20.00 rows=1000 width=12) (actual time=0.000..0.001 rows=0 loops=1)
                                      ->  Parallel Seq Scan on state_y2024m07 state_7  (cost=0.00..20.00 rows=1000 width=12) (actual time=0.000..0.001 rows=0 loops=1)
                                      ->  Parallel Seq Scan on state_y2024m08 state_8  (cost=0.00..20.00 rows=1000 width=12) (actual time=0.002..0.002 rows=0 loops=1)
                                ->  Hash  (cost=1.57..1.57 rows=57 width=58) (actual time=0.149..0.268 rows=65 loops=3)
                                      Buckets: 1024  Batches: 1  Memory Usage: 14kB
                                      ->  Seq Scan on tag  (cost=0.00..1.57 rows=57 width=58) (actual time=0.031..0.036 rows=65 loops=3)
Planning Time: 2.447 ms
JIT:
  Functions: 96
  Options: Inlining true, Optimization true, Expressions true, Deforming true
  Timing: Generation 14.487 ms, Inlining 105.515 ms, Optimization 203.723 ms, Emission 143.355 ms, Total 467.081 ms
Execution Time: 86164.911 ms
So what is wrong with my query? Why isn't it aggregating?