We have 2 applications, each with about 10 instances. All of these instances connect to the same Postgres database.
The applications use client-side pooling (some application ORMs provide a connection pool natively).
We plan to install PgBouncer (one for everything).
- Is this a good plan for performance?
- Can we remove the application-side connection pools once PgBouncer is installed?
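A minimal single-PgBouncer setup could look like the sketch below (the file paths, the database name appdb, and the pool sizes are illustrative assumptions, not a recommendation). Note that with pool_mode = transaction, session-level features (prepared statements, advisory locks, session SET) need care, which also affects whether the application pools can simply be dropped.

```ini
; pgbouncer.ini sketch -- one PgBouncer in front of ~20 app instances
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many client connections share few server connections
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

Keeping a small application-side pool in front of PgBouncer is common: it avoids per-query TCP connection setup on the client, while PgBouncer caps the total number of server connections.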
Does a DROP COLUMN operation scale?
Previously, we had problems when updating a large number of records, because the updates took up extra space.
Is that also the case for a DROP COLUMN operation?
The table has the following structure:
Looking at the statistics over two weeks, we see many updates on the field (total time: 25 minutes, measured from the application).
Since Postgres follows the MVCC model and rewrites rows on update, would it be worthwhile to move that part of the table structure into a separate table?
Here is a query together with its plan. The goal of the query is to list a table's structure: indexes, foreign keys, columns.
This query is fast on Postgres 11 (50 ms) and slow on Postgres 12 (500 ms). How can pg12 be 10x slower than pg11 for this query?
The query:
EXPLAIN (ANALYZE, BUFFERS)
SELECT
"tableConstraints".constraint_name AS "constraintName",
"tableConstraints".table_name AS "tableName",
"tableConstraints".constraint_type AS "columnType",
"keyColumnUsage".column_name AS "columnName",
"constraintColumnUsage".table_name AS "foreignTableName",
"constraintColumnUsage".column_name AS "foreignColumnName",
json_agg("uidx"."uniqueIndexes") filter (where "uidx"."uniqueIndexes" is not null) AS "uniqueIndexes"
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS "tableConstraints"
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS "keyColumnUsage"
ON "tableConstraints".constraint_name = "keyColumnUsage".constraint_name
JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE AS "constraintColumnUsage"
ON "constraintColumnUsage".constraint_name = "tableConstraints".constraint_name
FULL OUTER JOIN (
-- Get the index name, table name and list of columns of the unique indexes of a table
SELECT
pg_index.indexrelid::regclass AS "indexName",
"pgClass1".relname AS "tableName",
json_agg(DISTINCT pg_attribute.attname) AS "uniqueIndexes"
FROM
pg_class AS "pgClass1",
pg_class AS "pgClass2",
pg_index,
pg_attribute
WHERE "pgClass1".relname = 'projects'
AND "pgClass1".oid = pg_index.indrelid
AND "pgClass2".oid = pg_index.indexrelid
AND pg_attribute.attrelid = "pgClass1".oid
AND pg_attribute.attnum = ANY(pg_index.indkey)
AND not pg_index.indisprimary
AND pg_index.indisunique
AND "pgClass1".relkind = 'r'
AND not "pgClass1".relname like 'pg%'
GROUP BY
"tableName",
"indexName"
) AS "uidx"
ON "uidx"."tableName" = "tableConstraints".table_name
WHERE "uidx"."tableName" = 'projects'
OR "tableConstraints".table_name = 'projects'
GROUP BY
"constraintName",
"tableConstraints".table_name,
"columnType",
"columnName",
"foreignTableName",
"foreignColumnName"
Here is the beginning of the plan (the full plan is too long for this message):
GroupAggregate (cost=380.04..380.05 rows=1 width=384) (actual time=194.087..194.124 rows=4 loops=1)
Group Key: "*SELECT* 1".constraint_name, "*SELECT* 1".table_name, "*SELECT* 1".constraint_type, ((a.attname)::information_schema.sql_identifier), (("*SELECT* 1_1".relname)::information_schema.sql_identifier), (("*SELECT* 1_1".attname)::information_schema.sql_identifier)
Buffers: shared hit=41399 read=36
I/O Timings: read=0.270
-> Sort (cost=380.04..380.05 rows=1 width=384) (actual time=194.072..194.103 rows=12 loops=1)
Sort Key: "*SELECT* 1".constraint_name, "*SELECT* 1".table_name, "*SELECT* 1".constraint_type, ((a.attname)::information_schema.sql_identifier), (("*SELECT* 1_1".relname)::information_schema.sql_identifier), (("*SELECT* 1_1".attname)::information_schema.sql_identifier)
Sort Method: quicksort Memory: 31kB
Buffers: shared hit=41399 read=36
I/O Timings: read=0.270
-> Hash Full Join (cost=140.09..380.04 rows=1 width=384) (actual time=44.129..194.040 rows=12 loops=1)
Hash Cond: (("*SELECT* 1".table_name)::name = uidx."tableName")
Filter: ((uidx."tableName" = 'projects'::name) OR (("*SELECT* 1".table_name)::name = 'projects'::name))
Rows Removed by Filter: 110
Buffers: shared hit=41393 read=36
I/O Timings: read=0.270
-> Nested Loop (cost=129.73..369.69 rows=1 width=352) (actual time=5.651..193.250 rows=114 loops=1)
Join Filter: (c.conname = ("*SELECT* 1".constraint_name)::name)
Rows Removed by Join Filter: 35454
Buffers: shared hit=41310 read=36
I/O Timings: read=0.270
-> Nested Loop (cost=102.68..195.28 rows=1 width=320) (actual time=3.427..7.064 rows=114 loops=1)
Buffers: shared hit=2369 read=17
I/O Timings: read=0.138
-> Hash Join (cost=102.62..194.97 rows=4 width=296) (actual time=3.411..6.310 rows=114 loops=1)
Hash Cond: (c.conname = "*SELECT* 1_1".conname)
Buffers: shared hit=2027 read=17
I/O Timings: read=0.138
-> ProjectSet (cost=24.55..56.88 rows=16000 width=341) (actual time=0.840..3.580 rows=110 loops=1)
Buffers: shared hit=290 read=2
I/O Timings: read=0.030
-> Nested Loop (cost=24.55..32.06 rows=16 width=95) (actual time=0.391..1.097 rows=108 loops=1)
Buffers: shared hit=253
-> Hash Join (cost=24.53..31.18 rows=16 width=99) (actual time=0.378..0.657 rows=108 loops=1)
Hash Cond: (r.relnamespace = nr.oid)
Buffers: shared hit=36
-> Hash Join (cost=23.49..30.12 rows=24 width=103) (actual time=0.239..0.452 rows=108 loops=1)
Hash Cond: (c.conrelid = r.oid)
Buffers: shared hit=27
-> Seq Scan on pg_constraint c (cost=0.00..6.55 rows=141 width=95) (actual time=0.012..0.114 rows=108 loops=1)
Filter: (contype = ANY ('{p,u,f}'::"char"[]))
Rows Removed by Filter: 10
Buffers: shared hit=6
-> Hash (cost=23.12..23.12 rows=105 width=12) (actual time=0.205..0.206 rows=108 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
Buffers: shared hit=21
-> Seq Scan on pg_class r (cost=0.00..23.12 rows=105 width=12) (actual time=0.008..0.185 rows=108 loops=1)
Filter: (relkind = ANY ('{r,p}'::"char"[]))
Rows Removed by Filter: 512
Buffers: shared hit=21
-> Hash (cost=1.02..1.02 rows=4 width=4) (actual time=0.123..0.124 rows=5 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
Buffers: shared hit=9
-> Seq Scan on pg_namespace nr (cost=0.00..1.02 rows=4 width=4) (actual time=0.047..0.104 rows=5 loops=1)
Filter: (NOT pg_is_other_temp_schema(oid))
Rows Removed by Filter: 2
Buffers: shared hit=9
-> Index Only Scan using pg_namespace_oid_index on pg_namespace nc (cost=0.03..0.06 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=108)
Index Cond: (oid = c.connamespace)
Heap Fetches: 108
Buffers: shared hit=217
-> Hash (cost=78.06..78.06 rows=4 width=192) (actual time=2.550..2.565 rows=124 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 36kB
Buffers: shared hit=1737 read=15
I/O Timings: read=0.108
-> Append (cost=34.79..78.06 rows=4 width=192) (actual time=0.522..2.503 rows=124 loops=1)
Buffers: shared hit=1737 read=15
I/O Timings: read=0.108
-> Subquery Scan on "*SELECT* 1_1" (cost=34.79..34.79 rows=1 width=192) (actual time=0.521..0.538 rows=14 loops=1)
Buffers: shared hit=281 read=10
I/O Timings: read=0.071
-> Unique (cost=34.79..34.79 rows=1 width=324) (actual time=0.520..0.534 rows=14 loops=1)
Buffers: shared hit=281 read=10
I/O Timings: read=0.071
-> Sort (cost=34.79..34.79 rows=1 width=324) (actual time=0.519..0.526 rows=28 loops=1)
Sort Key: nr_1.nspname, r_1.relname, r_1.relowner, a_1.attname, nc_1.nspname, c_1.conname
Sort Method: quicksort Memory: 39kB
Buffers: shared hit=281 read=10
I/O Timings: read=0.071
-> Nested Loop (cost=0.20..34.78 rows=1 width=324) (actual time=0.178..0.489 rows=28 loops=1)
Join Filter: (c_1.connamespace = nc_1.oid)
Rows Removed by Join Filter: 140
Buffers: shared hit=281 read=10
I/O Timings: read=0.071
-> Nested Loop (cost=0.20..33.74 rows=1 width=264) (actual time=0.173..0.431 rows=28 loops=1)
Buffers: shared hit=253 read=10
I/O Timings: read=0.071
-> Nested Loop (cost=0.17..33.49 rows=1 width=204) (actual time=0.163..0.394 rows=28 loops=1)
Buffers: shared hit=197 read=10
I/O Timings: read=0.071
-> Nested Loop (cost=0.11..33.42 rows=1 width=140) (actual time=0.154..0.336 rows=28 loops=1)
Buffers: shared hit=113 read=10
I/O Timings: read=0.071
-> Nested Loop (cost=0.06..30.91 rows=1 width=76) (actual time=0.103..0.216 rows=28 loops=1)
Buffers: shared hit=27 read=9
I/O Timings: read=0.061
-> Seq Scan on pg_constraint c_1 (cost=0.00..6.51 rows=6 width=72) (actual time=0.008..0.045 rows=10 loops=1)
Filter: (contype = 'c'::"char")
Rows Removed by Filter: 108
Buffers: shared hit=6
-> Index Scan using pg_depend_depender_index on pg_depend d (cost=0.06..4.06 rows=1 width=12) (actual time=0.014..0.016 rows=3 loops=10)
Index Cond: ((classid = '2606'::oid) AND (objid = c_1.oid))
Filter: (refclassid = '1259'::oid)
Rows Removed by Filter: 0
Buffers: shared hit=21 read=9
I/O Timings: read=0.061
-> Index Scan using pg_attribute_relid_attnum_index on pg_attribute a_1 (cost=0.06..2.51 rows=1 width=70) (actual time=0.004..0.004 rows=1 loops=28)
Index Cond: ((attrelid = d.refobjid) AND (attnum = d.refobjsubid))
Filter: (NOT attisdropped)
Buffers: shared hit=86 read=1
I/O Timings: read=0.010
-> Index Scan using pg_class_oid_index on pg_class r_1 (cost=0.06..0.07 rows=1 width=76) (actual time=0.002..0.002 rows=1 loops=28)
Index Cond: (oid = a_1.attrelid)
Filter: ((relkind = ANY ('{r,p}'::"char"[])) AND pg_has_role(relowner, 'USAGE'::text))
Buffers: shared hit=84
-> Index Scan using pg_namespace_oid_index on pg_namespace nr_1 (cost=0.03..0.20 rows=1 width=68) (actual time=0.001..0.001 rows=1 loops=28)
Index Cond: (oid = r_1.relnamespace)
Buffers: shared hit=56
-> Seq Scan on pg_namespace nc_1 (cost=0.00..1.02 rows=6 width=68) (actual time=0.000..0.001 rows=6 loops=28)
Buffers: shared hit=28
-> Subquery Scan on "*SELECT* 2_1" (cost=23.66..43.26 rows=3 width=192) (actual time=0.315..1.950 rows=110 loops=1)
Buffers: shared hit=1456 read=5
I/O Timings: read=0.038
-> Nested Loop (cost=23.66..43.25 rows=3 width=324) (actual time=0.314..1.935 rows=110 loops=1)
Buffers: shared hit=1456 read=5
I/O Timings: read=0.038
-> Nested Loop (cost=23.63..43.06 rows=3 width=196) (actual time=0.303..1.819 rows=110 loops=1)
Buffers: shared hit=1235 read=5
I/O Timings: read=0.038
-> Nested Loop (cost=23.61..42.59 rows=3 width=200) (actual time=0.291..1.628 rows=110 loops=1)
Join Filter: (r_2.oid = a_2.attrelid)
Buffers: shared hit=1014 read=5
I/O Timings: read=0.038
-> Hash Join (cost=23.55..30.18 rows=8 width=195) (actual time=0.249..0.318 rows=108 loops=1)
Hash Cond: (CASE c_2.contype WHEN 'f'::"char" THEN c_2.confrelid ELSE c_2.conrelid END = r_2.oid)
Buffers: shared hit=46
-> Seq Scan on pg_constraint c_2 (cost=0.00..6.55 rows=141 width=123) (actual time=0.004..0.031 rows=108 loops=1)
Filter: (contype = ANY ('{p,u,f}'::"char"[]))
Rows Removed by Filter: 10
Buffers: shared hit=6
-> Hash (cost=23.43..23.43 rows=35 width=72) (actual time=0.224..0.225 rows=38 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
Buffers: shared hit=40
-> Seq Scan on pg_class r_2 (cost=0.00..23.43 rows=35 width=72) (actual time=0.072..0.214 rows=38 loops=1)
Filter: ((relkind = ANY ('{r,p}'::"char"[])) AND pg_has_role(relowner, 'USAGE'::text))
Rows Removed by Filter: 582
Buffers: shared hit=40
-> Index Scan using pg_attribute_relid_attnum_index on pg_attribute a_2 (cost=0.06..1.55 rows=1 width=70) (actual time=0.005..0.012 rows=1 loops=108)
Index Cond: (attrelid = CASE c_2.contype WHEN 'f'::"char" THEN c_2.confrelid ELSE c_2.conrelid END)
Filter: ((NOT attisdropped) AND (attnum = ANY (CASE c_2.contype WHEN 'f'::"char" THEN c_2.confkey ELSE c_2.conkey END)))
Rows Removed by Filter: 25
Buffers: shared hit=968 read=5
I/O Timings: read=0.038
-> Index Only Scan using pg_namespace_oid_index on pg_namespace nr_2 (cost=0.03..0.15 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=110)
Index Cond: (oid = r_2.relnamespace)
Heap Fetches: 110
Buffers: shared hit=221
-> Index Only Scan using pg_namespace_oid_index on pg_namespace nc_2 (cost=0.03..0.06 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=110)
Index Cond: (oid = c_2.connamespace)
Heap Fetches: 110
Buffers: shared hit=221
-> Index Scan using pg_attribute_relid_attnum_index on pg_attribute a (cost=0.06..0.08 rows=1 width=70) (actual time=0.005..0.005 rows=1 loops=114)
Index Cond: ((attrelid = r.oid) AND (attnum = ((information_schema._pg_expandarray(c.conkey))).x))
Filter: ((NOT attisdropped) AND (pg_has_role(r.relowner, 'USAGE'::text) OR has_column_privilege(r.oid, attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))
Buffers: shared hit=342
-> Append (cost=27.05..173.97 rows=124 width=160) (actual time=0.072..1.605 rows=312 loops=114)
Buffers: shared hit=38941 read=19
I/O Timings: read=0.132
-> Subquery Scan on "*SELECT* 1" (cost=27.05..35.65 rows=12 width=160) (actual time=0.072..0.317 rows=116 loops=114)
Buffers: shared hit=28556 read=8
I/O Timings: read=0.062
-> Nested Loop (cost=27.05..35.61 rows=12 width=512) (actual time=0.072..0.305 rows=116 loops=114)
Buffers: shared hit=28556 read=8
I/O Timings: read=0.062
-> Nested Loop (cost=27.03..34.94 rows=12 width=133) (actual time=0.069..0.195 rows=116 loops=114)
Join Filter: (r_3.relnamespace = nr_3.oid)
Rows Removed by Join Filter: 464
Buffers: shared hit=2107 read=8
I/O Timings: read=0.062
-> Seq Scan on pg_namespace nr_3 (cost=0.00..1.02 rows=4 width=4) (actual time=0.001..0.004 rows=5 loops=114)
Filter: (NOT pg_is_other_temp_schema(oid))
Rows Removed by Filter: 2
Buffers: shared hit=114
-> Materialize (cost=27.03..33.64 rows=18 width=137) (actual time=0.004..0.010 rows=116 loops=570)
Buffers: shared hit=1993 read=8
I/O Timings: read=0.062
-> Hash Join (cost=27.03..33.62 rows=18 width=137) (actual time=2.042..2.092 rows=116 loops=1)
Hash Cond: (c_3.conrelid = r_3.oid)
Buffers: shared hit=1993 read=8
I/O Timings: read=0.062
-> Seq Scan on pg_constraint c_3 (cost=0.00..6.51 rows=147 width=73) (actual time=0.005..0.031 rows=118 loops=1)
Filter: (contype <> ALL ('{t,x}'::"char"[]))
Buffers: shared hit=6
-> Hash (cost=26.77..26.77 rows=74 width=72) (actual time=2.006..2.007 rows=38 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 12kB
Buffers: shared hit=1987 read=8
I/O Timings: read=0.062
[content dropped]
-> Nested Loop (cost=0.20..10.34 rows=1 width=132) (actual time=0.506..0.599 rows=3 loops=1)
Join Filter: ("pgClass1".oid = pg_attribute.attrelid)
Buffers: shared hit=80
-> Nested Loop (cost=0.14..9.13 rows=1 width=103) (actual time=0.054..0.071 rows=3 loops=1)
Buffers: shared hit=17
-> Nested Loop (cost=0.08..9.02 rows=1 width=103) (actual time=0.039..0.046 rows=3 loops=1)
Buffers: shared hit=7
-> Index Scan using pg_class_relname_nsp_index on pg_class "pgClass1" (cost=0.06..4.06 rows=1 width=68) (actual time=0.012..0.014 rows=1 loops=1)
Index Cond: (relname = 'projects'::name)
Filter: ((relname !~~ 'pg%'::text) AND (relkind = 'r'::"char"))
Buffers: shared hit=3
-> Index Scan using pg_index_indrelid_index on pg_index (cost=0.03..4.96 rows=1 width=35) (actual time=0.024..0.028 rows=3 loops=1)
Index Cond: (indrelid = "pgClass1".oid)
Filter: ((NOT indisprimary) AND indisunique)
Rows Removed by Filter: 4
Buffers: shared hit=4
-> Index Only Scan using pg_class_oid_index on pg_class "pgClass2" (cost=0.06..0.10 rows=1 width=4) (actual time=0.007..0.007 rows=1 loops=3)
Index Cond: (oid = pg_index.indexrelid)
Heap Fetches: 3
Buffers: shared hit=10
-> Index Scan using pg_attribute_relid_attnum_index on pg_attribute (cost=0.06..1.21 rows=1 width=70) (actual time=0.153..0.174 rows=1 loops=3)
Index Cond: (attrelid = pg_index.indrelid)
Filter: (attnum = ANY ((pg_index.indkey)::smallint[]))
Rows Removed by Filter: 65
Buffers: shared hit=63
Planning Time: 20.217 ms
Execution Time: 195.657 ms
How can I automatically roll back every transaction after, say, 10 seconds?
Is this the right parameter?
idle_in_transaction_session_timeout
It looks good, but it only kills idle sessions. Can active transactions also be killed if they run too long? If so, how?
The context is avoiding deadlocks, stopping overload, bringing stability, and so on. Of course, this is a "last chance" safety net for the DB. It does not replace good practices such as reviewing query plans, designing indexes, constraints, transaction contents, etc...
Edit: a global automatic transaction rollback is not a good idea. I plan to set the timeout application-side, in the client itself, for long migration transactions.
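For reference, the two server settings involved behave differently; a minimal sketch (the values and the role name app_user are illustrative):

```sql
-- Ends a session that sits idle inside an open transaction for more than 10 s.
SET idle_in_transaction_session_timeout = '10s';

-- Aborts any single statement that runs longer than 10 s (this does kill
-- actively running queries, not just idle sessions).
SET statement_timeout = '10s';

-- Both can be scoped to one role instead of the whole session or cluster:
ALTER ROLE app_user SET statement_timeout = '10s';
```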
The database has the following cache hit rates:
table A: 0.006
table B: 0.955
table C: 0.023
Tables A and C are history tables: no relations, large content, no need for fast queries, and very few read requests. I looked for a feature to tell Postgres not to cache these tables, but in vain.
Is it as simple as this: if tables A and C are removed from the database, will table B's cache hit rate automatically increase? (assuming the same data volume)
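The per-table hit rates above can be recomputed from the statistics collector; a sketch (heap blocks only, index blocks have their own `idx_blks_*` counters):

```sql
-- Per-table buffer cache hit ratio from pg_statio_user_tables.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY hit_ratio NULLS LAST;
```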
A table contains 300 MB of bloat, which is less than 20% of the table's records. Autovacuum will clean it up within a few days, by which point it may be 350-400 MB. Disk space is not a problem.
What impact does this bloat have on my production? It seems it should get evicted from the cache since it is not being queried, but is that also true for bloat sitting in RAM?
Does it affect anything other than latency, CPU usage, or disk space?
I'm a beginner with jsonb queries.
Can this one be improved? The collection is a huge jsonb field, and maybe a single cross join would be enough.
SELECT actions
FROM layouts
CROSS JOIN jsonb_array_elements(elements) AS element
CROSS JOIN jsonb_array_elements(element.value->'sub'->'actions') as actions
WHERE id = 124350001
AND actions->>'id' = '1234'
AND "deletedAt" IS NULL;
Here is an example of a value of the "elements" field (one array element):
{
  "sub": { "actions": [{"id": "1234", "name": "one"}, {"id": "45678", "name": "two"}] }
}
The query should return a single action, for example:
{"id":"1234", "name": "one"}
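A self-contained reproduction of the query, assuming a minimal layouts table (the schema is guessed from the question; CROSS JOIN on a set-returning function is implicitly lateral, so the original form is already close to minimal):

```sql
-- Hypothetical minimal schema, names taken from the question.
CREATE TABLE layouts (
    id          bigint PRIMARY KEY,
    elements    jsonb,
    "deletedAt" timestamptz
);

INSERT INTO layouts (id, elements) VALUES
(124350001,
 '[{"sub": {"actions": [{"id": "1234", "name": "one"},
                        {"id": "45678", "name": "two"}]}}]');

SELECT actions
FROM layouts
CROSS JOIN LATERAL jsonb_array_elements(elements) AS element
CROSS JOIN LATERAL jsonb_array_elements(element.value->'sub'->'actions') AS actions
WHERE id = 124350001
  AND actions->>'id' = '1234'
  AND "deletedAt" IS NULL;
-- expected: the single action {"id": "1234", "name": "one"}
```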
I need to clone some records (updating a field at the same time). I found three approaches but am not sure which is best.
Solution 1, load into the application: fetch the records in the application, update the field, then insert new records. (But that requires fetching the records :()
Solution 2, INSERT INTO SELECT: works well, but needs updating whenever the table changes (e.g. new columns):
INSERT INTO my_table (field_a, field_b, field_c)
SELECT 42, mt.field_b, mt.field_c
FROM my_table as mt
WHERE mt.field_a = 45;
Solution 3, is a temporary table OK?
BEGIN;
SELECT * INTO TEMPORARY temp_my_table FROM my_table WHERE field_a = 45;
UPDATE temp_my_table SET field_a = 42;
INSERT INTO my_table SELECT * FROM temp_my_table;
DROP TABLE temp_my_table;
END;
Given 10 to 1000 records, what is the best solution?
Can the third one be executed concurrently?
Any ideas welcome.
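A sketch of solution 3 that tolerates new columns, assuming id is a sequence-backed primary key whose sequence is named my_table_id_seq (an assumption; without renewing the key, the final INSERT would hit a duplicate-key error):

```sql
BEGIN;

-- Per-session temp table, dropped automatically at commit.
CREATE TEMPORARY TABLE temp_my_table ON COMMIT DROP AS
SELECT * FROM my_table WHERE field_a = 45;

UPDATE temp_my_table
SET field_a = 42,
    id = nextval('my_table_id_seq');  -- give clones fresh primary keys

INSERT INTO my_table SELECT * FROM temp_my_table;

COMMIT;
```

On concurrency: temporary tables are private to their session, so concurrent executions never see each other's temp_my_table; only the final INSERT contends on my_table.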
I have an application that connects to my Postgres 12 using the node Postgres driver.
The pool size is currently 1.
On a single connection, is it possible to send several queries before receiving a response?
Picture two queries:
If this is possible, can query B be answered before query A?
The problem is to write the "ADD-VALUE" operation defined below.
For each "cat" value in the table below, keep only 3 records. Picture this table:
id | cat  | value | updatedAt
---|------|-------|--------------------
1  | cat1 | v1    | 06/01/2021 00:00:01
2  | cat1 | v2    | 06/01/2021 00:00:02
3  | cat1 | v3    | 06/01/2021 00:00:03 (pointer cat1 is here)
4  | cat2 | v1    | 06/01/2021 00:01:01
5  | cat2 | v2    | 06/01/2021 00:01:02 (pointer cat2 is here)
Insert case: calling ADD-VALUE(cat=cat2, value=v3) produces this result (new row marked):
id | cat  | value | updatedAt
---|------|-------|--------------------
1  | cat1 | v1    | 06/01/2021 00:00:01
2  | cat1 | v2    | 06/01/2021 00:00:02
3  | cat1 | v3    | 06/01/2021 00:00:03 (pointer cat1 is here)
4  | cat2 | v1    | 06/01/2021 00:01:01
5  | cat2 | v2    | 06/01/2021 00:01:02
6  | cat2 | v3    | 06/01/2021 00:01:02 (pointer cat2 is now here)
Update case: calling ADD-VALUE(cat=cat1, value=v4) produces this result (changed row marked):
id | cat  | value | updatedAt
---|------|-------|--------------------
1  | cat1 | v4    | 07/01/2021 00:00:04 (pointer cat1 is now here)
2  | cat1 | v2    | 06/01/2021 00:00:02
3  | cat1 | v3    | 06/01/2021 00:00:03
4  | cat2 | v1    | 06/01/2021 00:01:01
5  | cat2 | v2    | 06/01/2021 00:01:02 (pointer cat2 is here)
Any suggestion is welcome. Maybe an UPDATE-or-INSERT in a single query is impossible? I was thinking of using row_number to count the records per category.
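One way to sketch ADD-VALUE as a single statement is a writable CTE: overwrite the oldest row when the category already has 3 rows, otherwise insert. The table name cat_values and its columns are assumptions, and concurrent callers would still need extra locking (e.g. an advisory lock per cat) to be fully race-safe.

```sql
-- ADD-VALUE(cat2, v3) against a hypothetical table
-- cat_values(id serial PRIMARY KEY, cat text, value text, "updatedAt" timestamptz).
WITH oldest AS (
    SELECT id
    FROM cat_values
    WHERE cat = 'cat2'
    ORDER BY "updatedAt" ASC
    LIMIT 1
), cnt AS (
    SELECT count(*) AS n FROM cat_values WHERE cat = 'cat2'
), updated AS (
    -- Overwrite the oldest row only when the category is already full.
    UPDATE cat_values c
    SET value = 'v3', "updatedAt" = now()
    FROM oldest, cnt
    WHERE c.id = oldest.id AND cnt.n >= 3
    RETURNING c.id
)
-- Otherwise, append a new row.
INSERT INTO cat_values (cat, value, "updatedAt")
SELECT 'cat2', 'v3', now()
WHERE NOT EXISTS (SELECT 1 FROM updated);
```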
Table A is:
id integer
version varchar
data jsonb (large data, ~1 MB)
fkToBid integer (references B.id constraint)
Table B is:
id integer
other...
Processes aggressively run the two updates below, in any order and outside of any transaction.
The updated records in table A sometimes reference the same record in table B. Moreover, sometimes the same A record is updated.
UPDATE a SET version = :version WHERE id = :id
and
UPDATE a SET data = :data WHERE id = :id
Why can this deadlock occur? Is it because the updated records in table A reference the same row in table B? Could the deadlock have another cause?
Why do I see an AccessShareLock on B's primary-key index for these update queries?
I have two tables: "machines" and "magictools". "magictools" references "machines" with a foreign key.
While running many of these queries, I ran into deadlock problems:
-- this produces an "AccessExclusiveLock" of type "tuple" on machines
SELECT * FROM machines WHERE id = :id FOR UPDATE;
-- this produces a "RowExclusiveLock" on magictools and a "RowShareLock" on machines
UPDATE magictools SET collections = '...large-json...' WHERE id = :id;
As I understand it, running many of these queries produces the deadlocks. Maybe it is only the update that does it, I don't know. How should I avoid deadlocks in this situation?
I have a lot of indexes on these tables; maybe too many?
Below is the pg_activity report from when the problem occurred. I don't understand the different modes and lock types; simply put, what is happening here? Is it possible that mere updates, without any transaction, cause a deadlock?
Picture a table with two columns: id and catId.
How do I select the records whose catId is the same as that of the record with id = 3?
Example:
id  catId
1   3
2   4
3   4
4   4
5   3
Input: 3
Then the record {id = 3} has catId = 4,
and the number of records with {catId = 4} is 3.
Output: 3 records
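A sketch with a self-join, assuming the table is named t:

```sql
SELECT t.id, t."catId"
FROM t
JOIN t AS ref
  ON ref.id = 3               -- the reference record
 AND t."catId" = ref."catId"; -- all records sharing its catId
```

For the example data this returns the 3 records with ids 2, 3 and 4 (including the reference record itself; add `AND t.id <> ref.id` to exclude it).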
Picture two tables: environments and branches.
Their relation is: one environment has many branches. That is easy with an environment foreign key in the branches table.
The problem is adding this constraint: zero or one branch of an environment may have the "active" status.
The system must support these two queries:
One solution is an "activeBranchFK" column in the environments table, but that creates a circular dependency between the tables and does not look like a good solution.
Another solution is a boolean "active" in the branches table, but it seems to me we could reach an unwanted state where several branches of the same environment are active at the same time.
Have you ever modeled this kind of schema?
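The boolean-column approach can be made safe with a partial unique index, so the database itself enforces "at most one active branch per environment" (table and column names below are assumptions):

```sql
CREATE TABLE branches (
    id              serial PRIMARY KEY,
    "environmentId" integer NOT NULL REFERENCES environments (id),
    active          boolean NOT NULL DEFAULT false
);

-- Uniqueness applies only to rows where active is true, so each environment
-- may have any number of inactive branches but at most one active one.
CREATE UNIQUE INDEX one_active_branch_per_environment
    ON branches ("environmentId")
    WHERE active;
```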
The table "logs" (the FROM of request A) has many rows.
Request A is fast, and request B is fast.
Put together as request A WHERE IN (request B), it takes very long. Is this normal?
Request A is very fast:
--request A
SELECT "rid", max("createdAt") as "createdAt"
FROM "logs"
WHERE "rid" IN (17,71,196,187,111,86,108,81,54,184,245,27,118,100,175,136,130,67,45)
GROUP BY "rid";
Request B is very fast:
--request B
SELECT "dr"."rid"
FROM (
SELECT *
FROM (
SELECT "eid", "tid", count(*) over (partition by "eid", "tid") count, id as "rid"
FROM rs
WHERE "deletedAt" is NULL
) "nr"
WHERE count > 1
) "dr"
INNER JOIN teams ON teams.id = "dr"."tid"
INNER JOIN projects ON projects.id = teams."pid"
ORDER BY "eid", "tid"
The result of B is the same list of numbers as in request A's WHERE IN.
Replacing the WHERE IN list of request A with request B makes it very slow.
Postgres: 9.5.6
Picture a table with the following data. How can I exclude the "lonely" values (those that appear only once)?
id; data
(1, 'foo'),
(2, 'foo'),
(3, 'foo'),
(4, 'bar'),
(5, 'bar'),
(6, 'jak');
I tried this:
select id, data, row_number() over (partition by data)
from t;
-- RESULT
id data row_number
4 bar 1
5 bar 2
1 foo 1
2 foo 2
3 foo 3
6 jak 1
-- EXPECTED (exclude partition with one row)
id data row_number
4 bar 1
5 bar 2
1 foo 1
2 foo 2
3 foo 3
row_number() gives each row an index within its partition.
How can I get the row count per partition? Something like WHERE partition_count > 1.
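count(*) can also be used as a window function over the same partition, and then filtered in an outer query; a sketch assuming the table is named t:

```sql
SELECT id, data, row_number
FROM (
    SELECT id,
           data,
           row_number() OVER (PARTITION BY data ORDER BY id) AS row_number,
           count(*)     OVER (PARTITION BY data)             AS partition_count
    FROM t
) s
WHERE partition_count > 1;  -- drops 'jak', keeps 'foo' and 'bar'
```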
I'm not sure "undouble" is a word. I just want to add a unique index on a column, but it contains duplicates ("doublons"), so I need to update the data first.
Picture this data:
Before the update
1;foo
2;foo
3;foo
4;bar
5;bar
6;anyother
After the update
1;foo0
2;foo1
3;foo2
4;bar0
5;bar1
6;anyother
Note that values without duplicates are unchanged.
My database is Postgres 9.5.6.
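A sketch of the update, assuming the table is named t(id, data): a 0-based suffix is appended to every value that appears more than once, and unique values are left alone. Window functions in a FROM subquery work on 9.5:

```sql
UPDATE t
SET data = t.data || (d.rn - 1)::text  -- foo -> foo0, foo1, foo2 ...
FROM (
    SELECT id,
           row_number() OVER (PARTITION BY data ORDER BY id) AS rn,
           count(*)     OVER (PARTITION BY data)             AS cnt
    FROM t
) d
WHERE t.id = d.id
  AND d.cnt > 1;  -- leave non-duplicated values ('anyother') untouched
```

After this, the unique index can be created; on newer Postgres the same statement keeps working, so it can be reused in a migration.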