
Fake Name's questions

Fake Name
Asked: 2018-11-04 17:12:04 +0800 CST

PostgreSQL won't use my covering index and falls back to a much slower bitmap scan

  • 6

I'm trying to figure out why one of my tables is using a bitmap heap scan when an index scan is dramatically faster.

The table:

webarchive=# \d web_pages
                                               Table "public.web_pages"
      Column       |            Type             |                              Modifiers
-------------------+-----------------------------+---------------------------------------------------------------------
 id                | bigint                      | not null default nextval('web_pages_id_seq'::regclass)
 state             | dlstate_enum                | not null
 errno             | integer                     |
 url               | text                        | not null
 starturl          | text                        | not null
 netloc            | text                        | not null
 file              | bigint                      |
 priority          | integer                     | not null
 distance          | integer                     | not null
 is_text           | boolean                     |
 limit_netloc      | boolean                     |
 title             | citext                      |
 mimetype          | text                        |
 type              | itemtype_enum               |
 content           | text                        |
 fetchtime         | timestamp without time zone |
 addtime           | timestamp without time zone |
 normal_fetch_mode | boolean                     | default true
 ignoreuntiltime   | timestamp without time zone | not null default '1970-01-01 00:00:00'::timestamp without time zone
Indexes:
    "web_pages_pkey" PRIMARY KEY, btree (id)
    "ix_web_pages_url" UNIQUE, btree (url)
    "ix_web_pages_distance" btree (distance)
    "ix_web_pages_fetchtime" btree (fetchtime)
    "ix_web_pages_id" btree (id)
    "ix_web_pages_id_state" btree (id, state)
    "ix_web_pages_netloc" btree (netloc)
    "ix_web_pages_priority" btree (priority)
    "ix_web_pages_state" btree (state)
    "web_pages_netloc_fetchtime_idx" btree (netloc, fetchtime)
    "web_pages_netloc_id_idx" btree (netloc, id)
Foreign-key constraints:
    "web_pages_file_fkey" FOREIGN KEY (file) REFERENCES web_files(id)
Tablespace: "main_webarchive_tablespace"

The query:

EXPLAIN ANALYZE UPDATE
    web_pages
SET
    state = 'new'
WHERE
    (state = 'fetching' OR state = 'processing')
AND
    id <= 150000000;

In this case, since I have a covering index (ix_web_pages_id_state), I would expect the query planner to just do an index scan. Instead, it produces a bitmap heap scan, which is much slower:

                                                                          QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=524.06..532.09 rows=2 width=671) (actual time=2356.900..2356.900 rows=0 loops=1)
   ->  Bitmap Heap Scan on web_pages  (cost=524.06..532.09 rows=2 width=671) (actual time=2356.896..2356.896 rows=0 loops=1)
         Recheck Cond: (((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum)) AND (id <= 150000000))
         Heap Blocks: exact=6
         ->  BitmapAnd  (cost=524.06..524.06 rows=2 width=0) (actual time=2353.388..2353.388 rows=0 loops=1)
               ->  BitmapOr  (cost=151.98..151.98 rows=6779 width=0) (actual time=2021.635..2021.636 rows=0 loops=1)
                     ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..147.41 rows=6779 width=0) (actual time=2021.616..2021.617 rows=11668131 loops=1)
                           Index Cond: (state = 'fetching'::dlstate_enum)
                     ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..4.57 rows=1 width=0) (actual time=0.015..0.016 rows=0 loops=1)
                           Index Cond: (state = 'processing'::dlstate_enum)
               ->  Bitmap Index Scan on web_pages_pkey  (cost=0.00..371.83 rows=16435 width=0) (actual time=0.046..0.047 rows=205 loops=1)
                     Index Cond: (id <= 150000000)
 Planning time: 0.232 ms
 Execution time: 2406.234 ms
(14 rows)

If I force it not to do a bitmap heap scan (via set enable_bitmapscan to off;), it produces a much faster plan:

                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=0.56..38591.75 rows=2 width=671) (actual time=0.284..0.285 rows=0 loops=1)
   ->  Index Scan using web_pages_pkey on web_pages  (cost=0.56..38591.75 rows=2 width=671) (actual time=0.281..0.281 rows=0 loops=1)
         Index Cond: (id <= 150000000)
         Filter: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
         Rows Removed by Filter: 181
 Planning time: 0.190 ms
 Execution time: 0.334 ms
(7 rows)

I re-ran a VACUUM ANALYZE to see whether the table statistics might be out of date, but that didn't seem to help. Also, the above is after re-running the same query several times, so I don't think caching should be a factor either.

How can I induce the planner to generate a better-performing plan here?
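For reference, one option not among the indexes listed above would be a partial index that covers only the rare states. This is just a sketch (the index name is made up, and whether it helps depends on how skewed the state column actually is):

-- Hypothetical partial index: only rows in the two "active" states are indexed,
-- so a predicate on those states plus an id range touches a tiny index.
CREATE INDEX CONCURRENTLY ix_web_pages_active_id
    ON web_pages (id)
    WHERE state = 'fetching' OR state = 'processing';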


Edit: As suggested in the comments, I added an index "ix_web_pages_state_id" btree (state, id). Unfortunately, it didn't help.

I've also tried reducing random_page_cost (as low as 0.5), as well as increasing the statistics target, neither of which had any effect.
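For concreteness, the sort of commands involved were roughly the following (a sketch; the exact values tried varied):

SET random_page_cost = 0.5;
ALTER TABLE web_pages ALTER COLUMN state SET STATISTICS 1000;
ALTER TABLE web_pages ALTER COLUMN id    SET STATISTICS 1000;
ANALYZE web_pages;  -- re-collect statistics with the new target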


Further edit - removing the OR condition:

EXPLAIN ANALYZE UPDATE
    web_pages
SET
    state = 'new'
WHERE
    state = 'fetching'
AND
    id <= 150000000;

yields:

                                                                       QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=311.83..315.84 rows=1 width=589) (actual time=2574.654..2574.655 rows=0 loops=1)
   ->  Bitmap Heap Scan on web_pages  (cost=311.83..315.84 rows=1 width=589) (actual time=2574.650..2574.651 rows=0 loops=1)
         Recheck Cond: ((id <= 150000000) AND (state = 'fetching'::dlstate_enum))
         Heap Blocks: exact=6
         ->  BitmapAnd  (cost=311.83..311.83 rows=1 width=0) (actual time=2574.556..2574.556 rows=0 loops=1)
               ->  Bitmap Index Scan on web_pages_pkey  (cost=0.00..49.60 rows=1205 width=0) (actual time=0.679..0.680 rows=726 loops=1)
                     Index Cond: (id <= 150000000)
               ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..261.98 rows=7122 width=0) (actual time=2519.950..2519.951 rows=11873888 loops=1)
                     Index Cond: (state = 'fetching'::dlstate_enum)

Further edit - MOAR WEIRDNESS:

I rewrote the query to use a subquery:

EXPLAIN ANALYZE UPDATE
    web_pages
SET
    state = 'new'
WHERE
    (state = 'fetching' OR state = 'processing')
AND
    id IN (
        SELECT 
            id 
        FROM 
            web_pages 
        WHERE 
            id <= 150000000
    );

This produces an execution plan that outperforms all the others so far. Sometimes:

                                                                        QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=1.12..13878.31 rows=1 width=595) (actual time=2.773..2.774 rows=0 loops=1)
   ->  Nested Loop  (cost=1.12..13878.31 rows=1 width=595) (actual time=2.772..2.773 rows=0 loops=1)
         ->  Index Scan using web_pages_pkey on web_pages web_pages_1  (cost=0.56..3533.34 rows=1205 width=14) (actual time=0.000..0.602 rows=181 loops=1)
               Index Cond: (id <= 150000000)
         ->  Index Scan using web_pages_pkey on web_pages  (cost=0.56..8.58 rows=1 width=585) (actual time=0.010..0.010 rows=0 loops=181)
               Index Cond: (id = web_pages_1.id)
               Filter: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
               Rows Removed by Filter: 1
 Planning time: 0.891 ms
 Execution time: 2.894 ms
(10 rows)

Update on web_pages  (cost=21193.19..48917.78 rows=2 width=595)
  ->  Hash Semi Join  (cost=21193.19..48917.78 rows=2 width=595)
        Hash Cond: (web_pages.id = web_pages_1.id)
        ->  Bitmap Heap Scan on web_pages  (cost=270.14..27976.00 rows=7126 width=585)
              Recheck Cond: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
              ->  BitmapOr  (cost=270.14..270.14 rows=7126 width=0)
                    ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..262.01 rows=7126 width=0)
                          Index Cond: (state = 'fetching'::dlstate_enum)
                    ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..4.57 rows=1 width=0)
                          Index Cond: (state = 'processing'::dlstate_enum)
        ->  Hash  (cost=20834.15..20834.15 rows=7112 width=14)
              ->  Index Scan using web_pages_pkey on web_pages web_pages_1  (cost=0.56..20834.15 rows=7112 width=14)
                    Index Cond: ((id > 1883250000) AND (id <= 1883300000))

At this point, I have no idea what's going on. All I know is that every one of these cases is affected by set enable_bitmapscan to off;.
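For what it's worth, if disabling bitmap scans ends up being the workaround, it can at least be scoped to a single transaction with SET LOCAL rather than changing the whole session (a sketch):

BEGIN;
SET LOCAL enable_bitmapscan = off;  -- reverts automatically at COMMIT/ROLLBACK
UPDATE web_pages
SET    state = 'new'
WHERE  (state = 'fetching' OR state = 'processing')
AND    id <= 150000000;
COMMIT;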


OK, the extremely long-running transaction I had going last night finished, and I managed to run a VACUUM VERBOSE ANALYZE on the table:

webarchive=# VACUUM ANALYZE VERBOSE web_pages;
INFO:  vacuuming "public.web_pages"
INFO:  scanned index "ix_web_pages_distance" to remove 33328301 row versions
DETAIL:  CPU 6.85s/21.21u sec elapsed 171.28 sec
INFO:  scanned index "ix_web_pages_fetchtime" to remove 33328301 row versions
DETAIL:  CPU 6.20s/25.28u sec elapsed 186.53 sec
INFO:  scanned index "ix_web_pages_id" to remove 33328301 row versions
DETAIL:  CPU 7.37s/29.56u sec elapsed 226.49 sec
INFO:  scanned index "ix_web_pages_netloc" to remove 33328301 row versions
DETAIL:  CPU 8.47s/41.44u sec elapsed 260.50 sec
INFO:  scanned index "ix_web_pages_priority" to remove 33328301 row versions
DETAIL:  CPU 5.65s/16.35u sec elapsed 180.78 sec
INFO:  scanned index "ix_web_pages_state" to remove 33328301 row versions
DETAIL:  CPU 4.51s/21.14u sec elapsed 189.60 sec
INFO:  scanned index "ix_web_pages_url" to remove 33328301 row versions
DETAIL:  CPU 26.59s/78.52u sec elapsed 969.99 sec
INFO:  scanned index "web_pages_netloc_fetchtime_idx" to remove 33328301 row versions
DETAIL:  CPU 8.23s/48.39u sec elapsed 301.37 sec
INFO:  scanned index "web_pages_netloc_id_idx" to remove 33328301 row versions
DETAIL:  CPU 15.52s/43.25u sec elapsed 423.68 sec
INFO:  scanned index "web_pages_pkey" to remove 33328301 row versions
DETAIL:  CPU 8.12s/33.43u sec elapsed 215.93 sec
INFO:  scanned index "ix_web_pages_id_state" to remove 33328301 row versions
DETAIL:  CPU 8.22s/33.26u sec elapsed 214.43 sec
INFO:  scanned index "ix_web_pages_state_id" to remove 33328301 row versions
DETAIL:  CPU 6.01s/28.04u sec elapsed 174.19 sec
INFO:  "web_pages": removed 33328301 row versions in 3408348 pages
DETAIL:  CPU 89.90s/50.24u sec elapsed 1928.70 sec
INFO:  index "ix_web_pages_distance" now contains 29463963 row versions in 215671 pages
DETAIL:  33328301 index row versions were removed.
32914 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_fetchtime" now contains 29463982 row versions in 253375 pages
DETAIL:  33328301 index row versions were removed.
40460 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_id" now contains 29464000 row versions in 238212 pages
DETAIL:  33328301 index row versions were removed.
21081 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_netloc" now contains 29464025 row versions in 358150 pages
DETAIL:  33328301 index row versions were removed.
99235 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_priority" now contains 29464032 row versions in 214923 pages
DETAIL:  33328301 index row versions were removed.
21451 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_state" now contains 29466359 row versions in 215150 pages
DETAIL:  33328301 index row versions were removed.
81340 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_url" now contains 29466350 row versions in 1137027 pages
DETAIL:  33197635 index row versions were removed.
236405 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "web_pages_netloc_fetchtime_idx" now contains 29466381 row versions in 539255 pages
DETAIL:  33328301 index row versions were removed.
220594 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "web_pages_netloc_id_idx" now contains 29466392 row versions in 501276 pages
DETAIL:  33328301 index row versions were removed.
144217 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "web_pages_pkey" now contains 29466394 row versions in 236560 pages
DETAIL:  33173411 index row versions were removed.
20559 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_id_state" now contains 29466415 row versions in 256699 pages
DETAIL:  33328301 index row versions were removed.
27194 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_web_pages_state_id" now contains 29466435 row versions in 244076 pages
DETAIL:  33328301 index row versions were removed.
91918 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "web_pages": found 33339704 removable, 29367176 nonremovable row versions in 4224021 out of 4231795 pages
DETAIL:  2541 dead row versions cannot be removed yet.
There were 2079389 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 330.54s/537.34u sec elapsed 7707.90 sec.
INFO:  vacuuming "pg_toast.pg_toast_705758310"
INFO:  scanned index "pg_toast_705758310_index" to remove 7184381 row versions
DETAIL:  CPU 7.32s/13.70u sec elapsed 240.71 sec
INFO:  "pg_toast_705758310": removed 7184381 row versions in 2271192 pages
DETAIL:  CPU 62.81s/46.41u sec elapsed 1416.12 sec
INFO:  index "pg_toast_705758310_index" now contains 114558558 row versions in 338256 pages
DETAIL:  7184381 index row versions were removed.
2033 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_705758310": found 7184381 removable, 40907769 nonremovable row versions in 11388831 out of 29033065 pages
DETAIL:  5 dead row versions cannot be removed yet.
There were 74209 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 433.26s/247.73u sec elapsed 8444.85 sec.
INFO:  analyzing "public.web_pages"
INFO:  "web_pages": scanned 600000 of 4232727 pages, containing 4191579 live rows and 4552 dead rows; 600000 rows in sample, 29569683 estimated total rows
VACUUM

It's still generating plans that aren't index-only, although the execution time is now much more in line with the index-only version. I don't understand why the behaviour changed so much. Does having a very long-running query cause that much overhead?

webarchive=# EXPLAIN ANALYZE UPDATE
        web_pages
    SET
        state = 'new'
    WHERE
        (state = 'fetching' OR state = 'processing')
    AND
        id IN (
            SELECT
                id
            FROM
                web_pages
            WHERE
                id > 1883250000
            AND
                id <= 1883300000
        );
                                                                         QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=36.00..9936.00 rows=1 width=594) (actual time=37.856..37.857 rows=0 loops=1)
   ->  Nested Loop Semi Join  (cost=36.00..9936.00 rows=1 width=594) (actual time=37.852..37.853 rows=0 loops=1)
         ->  Bitmap Heap Scan on web_pages  (cost=35.44..3167.00 rows=788 width=584) (actual time=23.984..31.489 rows=2321 loops=1)
               Recheck Cond: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
               Heap Blocks: exact=2009
               ->  BitmapOr  (cost=35.44..35.44 rows=788 width=0) (actual time=22.347..22.348 rows=0 loops=1)
                     ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..30.47 rows=788 width=0) (actual time=22.326..22.327 rows=9202 loops=1)
                           Index Cond: (state = 'fetching'::dlstate_enum)
                     ->  Bitmap Index Scan on ix_web_pages_state_id  (cost=0.00..4.57 rows=1 width=0) (actual time=0.017..0.017 rows=0 loops=1)
                           Index Cond: (state = 'processing'::dlstate_enum)
         ->  Index Scan using ix_web_pages_id_state on web_pages web_pages_1  (cost=0.56..8.58 rows=1 width=14) (actual time=0.001..0.001 rows=0 loops=2321)
               Index Cond: ((id = web_pages.id) AND (id > 1883250000) AND (id <= 1883300000))
 Planning time: 2.677 ms
 Execution time: 37.945 ms
(14 rows)

Interestingly, the value of the ID offset seems to affect the planning:

webarchive=# EXPLAIN ANALYZE UPDATE
        web_pages
    SET
        state = 'new'
    WHERE
        (state = 'fetching' OR state = 'processing')
    AND
        id IN (
            SELECT
                id
            FROM
                web_pages
            WHERE
                id >  149950000
            AND
                id <= 150000000
        );
                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=1.12..17.18 rows=1 width=594) (actual time=0.030..0.031 rows=0 loops=1)
   ->  Nested Loop  (cost=1.12..17.18 rows=1 width=594) (actual time=0.026..0.028 rows=0 loops=1)
         ->  Index Scan using ix_web_pages_id_state on web_pages web_pages_1  (cost=0.56..8.58 rows=1 width=14) (actual time=0.022..0.024 rows=0 loops=1)
               Index Cond: ((id > 149950000) AND (id <= 150000000))
         ->  Index Scan using ix_web_pages_id_state on web_pages  (cost=0.56..8.59 rows=1 width=584) (never executed)
               Index Cond: (id = web_pages_1.id)
               Filter: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
 Planning time: 1.531 ms
 Execution time: 0.155 ms
(9 rows)

Does the query planner take the values of the query parameters into account when planning? I would have thought planning would be independent of the query parameters, but thinking about it now, using the parameter values to improve the plan makes sense, so I can see it working that way.

Interestingly, the bitmap scan now seems to perform much better:

webarchive=# set enable_bitmapscan to off;
SET
webarchive=#     EXPLAIN ANALYZE UPDATE
        web_pages
    SET
        state = 'new'
    WHERE
        (state = 'fetching' OR state = 'processing')
    AND
        id IN (
            SELECT
                id
            FROM
                web_pages
            WHERE
                id > 1883250000
            AND
                id <= 1883300000
        );
                                                                          QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=1.12..82226.59 rows=1 width=594) (actual time=66.993..66.994 rows=0 loops=1)
   ->  Nested Loop  (cost=1.12..82226.59 rows=1 width=594) (actual time=66.992..66.993 rows=0 loops=1)
         ->  Index Scan using web_pages_pkey on web_pages web_pages_1  (cost=0.56..21082.82 rows=7166 width=14) (actual time=0.055..20.206 rows=8567 loops=1)
               Index Cond: ((id > 1883250000) AND (id <= 1883300000))
         ->  Index Scan using web_pages_pkey on web_pages  (cost=0.56..8.52 rows=1 width=584) (actual time=0.004..0.004 rows=0 loops=8567)
               Index Cond: (id = web_pages_1.id)
               Filter: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
               Rows Removed by Filter: 1
 Planning time: 1.963 ms
 Execution time: 67.112 ms
(10 rows)

webarchive=# set enable_bitmapscan to on;
SET
webarchive=#     EXPLAIN ANALYZE UPDATE
        web_pages
    SET
        state = 'new'
    WHERE
        (state = 'fetching' OR state = 'processing')
    AND
        id IN (
            SELECT
                id
            FROM
                web_pages
            WHERE
                id > 1883250000
            AND
                id <= 1883300000
        );
                                                                         QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
 Update on web_pages  (cost=36.00..9936.00 rows=1 width=594) (actual time=23.331..23.331 rows=0 loops=1)
   ->  Nested Loop Semi Join  (cost=36.00..9936.00 rows=1 width=594) (actual time=23.327..23.328 rows=0 loops=1)
         ->  Bitmap Heap Scan on web_pages  (cost=35.44..3167.00 rows=788 width=584) (actual time=6.727..17.027 rows=1966 loops=1)
               Recheck Cond: ((state = 'fetching'::dlstate_enum) OR (state = 'processing'::dlstate_enum))
               Heap Blocks: exact=3825
               ->  BitmapOr  (cost=35.44..35.44 rows=788 width=0) (actual time=3.499..3.499 rows=0 loops=1)
                     ->  Bitmap Index Scan on ix_web_pages_state  (cost=0.00..30.47 rows=788 width=0) (actual time=3.471..3.472 rows=21996 loops=1)
                           Index Cond: (state = 'fetching'::dlstate_enum)
                     ->  Bitmap Index Scan on ix_web_pages_state_id  (cost=0.00..4.57 rows=1 width=0) (actual time=0.022..0.023 rows=0 loops=1)
                           Index Cond: (state = 'processing'::dlstate_enum)
         ->  Index Scan using ix_web_pages_id_state on web_pages web_pages_1  (cost=0.56..8.58 rows=1 width=14) (actual time=0.001..0.001 rows=0 loops=1966)
               Index Cond: ((id = web_pages.id) AND (id > 1883250000) AND (id <= 1883300000))
 Planning time: 0.774 ms
 Execution time: 23.425 ms
(14 rows)

So I think the issue was just that the index had LOTS of rows that were no longer valid, and the process of filtering those out was the primary time cost. The underlying issue here is (I think) the way the MVCC system interacts with the VACUUM system in the context of extremely long-running transactions.

It would make sense (in retrospect) that entries cannot be removed from an index until every single transaction that could use that index has completed. From the documentation:

But there is an additional requirement for any table scan in PostgreSQL: it must verify that each retrieved row be "visible" to the query's MVCC snapshot, as discussed in Chapter 13. Visibility information is not stored in index entries, only in heap entries; so at first glance it would seem that every row retrieval would require a heap access anyway. And this is indeed the case, if the table row has been modified recently. However, for seldom-changing data there is a way around this problem.

In this case, I started a db dump, and then went on and did a bunch of cleanup (which involved a LOT of row churn). That would lead to lots of heap lookups for each index query, since the index contained lots of now-deleted rows.

This is mostly hypothetical, though, since I don't have the resources to try to recreate the situation.

Anyways, @jjanes's hint about long-running queries was the key to finding my way down the rabbit hole here.
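For anyone who ends up in the same rabbit hole: a quick way to check whether an old transaction is holding back cleanup is to look at pg_stat_activity for sessions with an old xact_start (a sketch; adjust to taste):

SELECT pid,
       now() - xact_start AS xact_age,
       state,
       left(query, 60)    AS current_query
FROM   pg_stat_activity
WHERE  xact_start IS NOT NULL
ORDER  BY xact_start;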

postgresql performance
  • 1 Answer
  • 353 Views
Fake Name
Asked: 2018-10-22 20:30:40 +0800 CST

VACUUM not reducing the reported database size?

  • 1

I have a database in postgresql with one very large table.

The only way I know of to reduce the on-disk size is VACUUM FULL, but I can't do that, because I don't have enough free space (it's a 920 GB table on a 1 TB disk, and I can't afford another 1 TB SSD ATM).

However, I did run VACUUM VERBOSE ANALYZE web_pages, and it completed, but the table size (as reported by psql) hasn't decreased at all.

Basically, is there any way to shrink the table without VACUUM FULL or a complete dump/load? I do have the space for a dump/load, but at this point I expect it would take over a week.
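To get a sense of how much of the table is actually reclaimable free space (as opposed to live data), the pgstattuple extension can report it directly. A sketch, assuming the extension is installed; note that it reads the whole table, so it will be slow at this size:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT pg_size_pretty(pg_table_size('web_pages')) AS table_size,
       pg_size_pretty(free_space)                 AS free_space,
       round(free_percent::numeric, 1)            AS free_pct
FROM   pgstattuple('web_pages');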

Vacuum output:

webarchive=# VACUUM VERBOSE ANALYZE web_pages;
INFO:  vacuuming "public.web_pages"
INFO:  scanned index "ix_web_pages_distance_filtered" to remove 145580643 row versions
DETAIL:  CPU 4.46s/165.77u sec elapsed 324.63 sec
INFO:  scanned index "ix_web_pages_netloc" to remove 145580643 row versions
DETAIL:  CPU 40.65s/4686.88u sec elapsed 5387.13 sec
INFO:  scanned index "ix_web_pages_priority" to remove 145580643 row versions
DETAIL:  CPU 29.59s/1018.71u sec elapsed 1452.67 sec
INFO:  scanned index "ix_web_pages_state" to remove 145580643 row versions
DETAIL:  CPU 22.08s/303.12u sec elapsed 712.94 sec
INFO:  scanned index "ix_web_pages_url" to remove 145580643 row versions
DETAIL:  CPU 283.45s/673.39u sec elapsed 7583.39 sec
INFO:  scanned index "web_pages_pkey" to remove 145580643 row versions
DETAIL:  CPU 51.69s/90.19u sec elapsed 1461.37 sec
INFO:  scanned index "ix_web_pages_id" to remove 145580643 row versions
DETAIL:  CPU 63.13s/99.77u sec elapsed 1529.22 sec
INFO:  scanned index "web_pages_netloc_fetchtime_idx" to remove 145580643 row versions
DETAIL:  CPU 77.04s/5080.52u sec elapsed 6287.14 sec
INFO:  scanned index "id_web_pages_id_state" to remove 145580643 row versions
DETAIL:  CPU 64.52s/107.81u sec elapsed 1695.07 sec
INFO:  scanned index "web_pages_fetchtime_idx" to remove 145580643 row versions
DETAIL:  CPU 12.06s/99.66u sec elapsed 408.36 sec
INFO:  "web_pages": removed 145580643 row versions in 8584664 pages
DETAIL:  CPU 226.70s/140.17u sec elapsed 5019.28 sec
INFO:  index "ix_web_pages_distance_filtered" now contains 16007295 row versions in 814166 pages
DETAIL:  38738938 index row versions were removed.
570268 index pages have been deleted, 385915 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO:  index "ix_web_pages_netloc" now contains 27370778 row versions in 3181634 pages
DETAIL:  67244989 index row versions were removed.
2669376 index pages have been deleted, 1876620 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "ix_web_pages_priority" now contains 27370960 row versions in 2006220 pages
DETAIL:  67218177 index row versions were removed.
1056657 index pages have been deleted, 786603 are currently reusable.
CPU 0.01s/0.00u sec elapsed 0.03 sec.
INFO:  index "ix_web_pages_state" now contains 27370969 row versions in 1532024 pages
DETAIL:  67244989 index row versions were removed.
986826 index pages have been deleted, 700367 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "ix_web_pages_url" now contains 27382514 row versions in 7555366 pages
DETAIL:  78562001 index row versions were removed.
4290425 index pages have been deleted, 225461 are currently reusable.
CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  index "web_pages_pkey" now contains 27401242 row versions in 2421605 pages
DETAIL:  78000787 index row versions were removed.
1068399 index pages have been deleted, 373558 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "ix_web_pages_id" now contains 27411627 row versions in 2874706 pages
DETAIL:  82612172 index row versions were removed.
1290296 index pages have been deleted, 442226 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "web_pages_netloc_fetchtime_idx" now contains 27556711 row versions in 4482440 pages
DETAIL:  80962513 index row versions were removed.
3373490 index pages have been deleted, 1873800 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO:  index "id_web_pages_id_state" now contains 27558627 row versions in 3094617 pages
DETAIL:  81497647 index row versions were removed.
1735454 index pages have been deleted, 631419 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "web_pages_fetchtime_idx" now contains 27559941 row versions in 656103 pages
DETAIL:  67710984 index row versions were removed.
228974 index pages have been deleted, 95938 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "web_pages": found 32297714 removable, 26459019 nonremovable row versions in 14298550 out of 14827067 pages
DETAIL:  1671 dead row versions cannot be removed yet.
There were 378926914 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 1149.21s/12598.17u sec elapsed 35893.00 sec.
INFO:  vacuuming "pg_toast.pg_toast_38269400"
INFO:  scanned index "pg_toast_38269400_index" to remove 178956680 row versions
DETAIL:  CPU 33.85s/139.43u sec elapsed 774.95 sec
INFO:  "pg_toast_38269400": removed 178956680 row versions in 47342563 pages
DETAIL:  CPU 1267.31s/752.22u sec elapsed 22404.29 sec
INFO:  scanned index "pg_toast_38269400_index" to remove 162873580 row versions
DETAIL:  CPU 20.65s/43.54u sec elapsed 216.38 sec
INFO:  "pg_toast_38269400": removed 162873580 row versions in 39900140 pages
DETAIL:  CPU 1085.52s/716.33u sec elapsed 13775.48 sec
INFO:  index "pg_toast_38269400_index" now contains 91453965 row versions in 1622691 pages
DETAIL:  341830260 index row versions were removed.
540140 index pages have been deleted, 1626 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO:  "pg_toast_38269400": found 275718152 removable, 85526893 nonremovable row versions in 102611808 out of 104048880 pages
DETAIL:  1031 dead row versions cannot be removed yet.
There were 14286891 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 4786.16s/3240.77u sec elapsed 79646.66 sec.
INFO:  analyzing "public.web_pages"
INFO:  "web_pages": scanned 90000 of 14840002 pages, containing 166193 live rows and 1769 dead rows; 90000 rows in sample, 27403383 estimated total rows
VACUUM
webarchive=#

Size reported before:

webarchive=# \d+
                                List of relations
 Schema |                Name      |   Type   |    Owner    |    Size    | Description
--------+--------------------------+----------+-------------+------------+-------------
..... 
public | web_pages                | table    | webarchuser | 920 GB     |
.....

After:

webarchive=# \d+
                                List of relations
 Schema |                Name      |   Type   |    Owner    |    Size    | Description
--------+--------------------------+----------+-------------+------------+-------------
 ........
 public | web_pages                | table    | webarchuser | 920 GB     |
 ........

I realize the "proper" solution here is a bigger disk, but this is a hobby project (albeit quite a large one), and I just don't have the money for more SSD storage.

postgresql maintenance
  • 3 Answers
  • 3617 Views
Fake Name
Asked: 2016-04-24 23:58:07 +0800 CST

`ON CONFLICT DO UPDATE` causing deadlocks?

  • 2

I have a project where I'm trying to use PostgreSQL's ON CONFLICT DO UPDATE clause, and I'm running into enormous numbers of deadlock issues.

My schema is as follows:

webarchive=# \d web_pages
                                               Table "public.web_pages"
      Column       |            Type             |                              Modifiers
-------------------+-----------------------------+---------------------------------------------------------------------
 id                | integer                     | not null default nextval('web_pages_id_seq'::regclass)
 state             | dlstate_enum                | not null
 errno             | integer                     |
 url               | text                        | not null
 starturl          | text                        | not null
 netloc            | text                        | not null
 file              | integer                     |
 priority          | integer                     | not null
 distance          | integer                     | not null
 is_text           | boolean                     |
 limit_netloc      | boolean                     |
 title             | citext                      |
 mimetype          | text                        |
 type              | itemtype_enum               |
 content           | text                        |
 fetchtime         | timestamp without time zone |
 addtime           | timestamp without time zone |
 tsv_content       | tsvector                    |
 normal_fetch_mode | boolean                     | default true
 ignoreuntiltime   | timestamp without time zone | not null default '1970-01-01 00:00:00'::timestamp without time zone
Indexes:
    "web_pages_pkey" PRIMARY KEY, btree (id)
    "ix_web_pages_url" UNIQUE, btree (url)
    "idx_web_pages_title" gin (to_tsvector('english'::regconfig, title::text))
    "ix_web_pages_distance" btree (distance)
    "ix_web_pages_distance_filtered" btree (priority) WHERE state = 'new'::dlstate_enum AND distance < 1000000 AND normal_fetch_mode = true
    "ix_web_pages_id" btree (id)
    "ix_web_pages_netloc" btree (netloc)
    "ix_web_pages_priority" btree (priority)
    "ix_web_pages_state" btree (state)
    "ix_web_pages_url_ops" btree (url text_pattern_ops)
    "web_pages_state_netloc_idx" btree (state, netloc)
Foreign-key constraints:
    "web_pages_file_fkey" FOREIGN KEY (file) REFERENCES web_files(id)
Triggers:
    update_row_count_trigger BEFORE INSERT OR UPDATE ON web_pages FOR EACH ROW EXECUTE PROCEDURE web_pages_content_update_func()

My update command is as follows:

INSERT INTO
    web_pages
    (url, starturl, netloc, distance, is_text, priority, type, fetchtime, state)
VALUES
    (:url, :starturl, :netloc, :distance, :is_text, :priority, :type, :fetchtime, :state)
ON CONFLICT (url) DO
    UPDATE
        SET
            state     = EXCLUDED.state,
            starturl  = EXCLUDED.starturl,
            netloc    = EXCLUDED.netloc,
            is_text   = EXCLUDED.is_text,
            distance  = EXCLUDED.distance,
            priority  = EXCLUDED.priority,
            fetchtime = EXCLUDED.fetchtime
        WHERE
            web_pages.fetchtime < :threshtime
        AND
            web_pages.url = EXCLUDED.url
    ;

(Note: parameters are escaped via the SQLAlchemy parameterized-query style)

I'm seeing dozens of deadlock errors, even under relatively light concurrency (6 workers):

Main.SiteArchiver.Process-5.MainThread - WARNING - SQLAlchemy OperationalError - Retrying.
Traceback (most recent call last):
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
psycopg2.extensions.TransactionRollbackError: deadlock detected
DETAIL:  Process 11391 waits for ShareLock on transaction 40632808; blocked by process 11389.
Process 11389 waits for ShareLock on transaction 40632662; blocked by process 11391.
HINT:  See server log for query details.
CONTEXT:  while inserting index tuple (743427,2) in relation "web_pages"


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/media/Storage/Scripts/ReadableWebProxy/WebMirror/Engine.py", line 558, in upsertResponseLinks
    self.db_sess.execute(cmd, params=new)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 1034, in execute
    bind, close_with_result=True).execute(clause, params or {})
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
    exc_info
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 183, in reraise
    raise value.with_traceback(tb)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/media/Storage/Scripts/ReadableWebProxy/flask/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.extensions.TransactionRollbackError) deadlock detected
DETAIL:  Process 11391 waits for ShareLock on transaction 40632808; blocked by process 11389.
Process 11389 waits for ShareLock on transaction 40632662; blocked by process 11391.
HINT:  See server log for query details.
CONTEXT:  while inserting index tuple (743427,2) in relation "web_pages"
 [SQL: '         INSERT INTO          web_pages          (url, starturl, netloc, distance, is_text, priority, type, fetchtime, state)         VALUES          (%(url)s, %(starturl)s, %(netloc)s, %(distance)s, %(is_text)s, %(priority)s, %(type)s, %(fetchtime)s, %(state)s)         ON CONFLICT (url) DO          UPDATE           SET            state     = EXCLUDED.state,            starturl  = EXCLUDED.starturl,            netloc    = EXCLUDED.netloc,            is_text   = EXCLUDED.is_text,            distance  = EXCLUDED.distance,            priority  = EXCLUDED.priority,            fetchtime = EXCLUDED.fetchtime           WHERE            web_pages.fetchtime < %(threshtime)s          ;         '] [parameters: {'url': 'xxxxxx', 'is_text': True, 'netloc': 'xxxxxx', 'distance': 1000000, 'priority': 10000, 'threshtime': datetime.datetime(2016, 4, 24, 0, 38, 10, 778866), 'state': 'new', 'starturl': 'xxxxxxx', 'type': 'unknown', 'fetchtime': datetime.datetime(2016, 4, 24, 0, 38, 10, 778934)}]

My transaction isolation level is REPEATABLE READ, so my understanding of how the database should work is that I'd see lots of serialization errors, but deadlocks shouldn't occur, because if two transactions change the same row, the later one should simply fail.

My guess is that the UPDATE is somehow locking against the INSERT query (or something like that), and I need to put a synchronization point(?) somewhere, but I don't understand the scope of the various query components well enough to do any real troubleshooting beyond just randomly changing things and seeing what happens. I've done some reading, but the PostgreSQL documentation is very abstract, and the ON CONFLICT xxx terminology doesn't seem to be widely used yet, so there aren't many resources for practical troubleshooting, particularly for non-SQL experts.

How can I go about troubleshooting this? I've also tried other isolation levels (READ COMMITTED, SERIALIZABLE), to no avail.
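One mitigation I've seen suggested for upsert deadlocks (an assumption on my part, not something from the PostgreSQL docs for ON CONFLICT) is to make every worker touch conflicting rows in the same order, e.g. by feeding each batch to the upsert sorted by url, so row locks are always acquired in a consistent order. A sketch with dummy values, using only the NOT NULL columns:

INSERT INTO web_pages (url, starturl, netloc, distance, priority, state)
SELECT url, starturl, netloc, distance, priority, state::dlstate_enum
FROM (VALUES
        ('http://example.com/a', 'http://example.com/', 'example.com', 0, 10000, 'new'),
        ('http://example.com/b', 'http://example.com/', 'example.com', 0, 10000, 'new')
     ) AS v(url, starturl, netloc, distance, priority, state)
ORDER BY url   -- consistent lock-acquisition order across workers
ON CONFLICT (url) DO UPDATE
    SET state = EXCLUDED.state;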

deadlock locking
  • 1 Answer
  • 7159 Views
Fake Name
Asked: 2015-09-30 18:30:29 +0800 CST

Why isn't my tsv index being used?

  • 3

I'm trying to get postgres full-text search working properly.

I have two tables: one I created for testing, and the actual table I want to be able to search:

The test table:

webarchive=# \d test_sites
                            Table "public.test_sites"
   Column    |   Type   |                        Modifiers
-------------+----------+---------------------------------------------------------
 id          | integer  | not null default nextval('test_sites_id_seq'::regclass)
 content     | text     |
 tsv_content | tsvector |
Indexes:
    "test_sites_pkey" PRIMARY KEY, btree (id)
    "idx_test_web_pages_content" gin (tsv_content)
Triggers:
    web_pages_testing_content_change_trigger AFTER INSERT OR UPDATE ON test_sites FOR EACH ROW EXECUTE PROCEDURE web_pages_testing_content_update_func()

The "real" table:

webarchive=# \d web_pages
                                      Table "public.web_pages"
    Column    |            Type             |                       Modifiers
--------------+-----------------------------+--------------------------------------------------------
 id           | integer                     | not null default nextval('web_pages_id_seq'::regclass)
 state        | dlstate_enum                | not null
 errno        | integer                     |
 url          | text                        | not null
 starturl     | text                        | not null
 netloc       | text                        | not null
 file         | integer                     |
 priority     | integer                     | not null
 distance     | integer                     | not null
 is_text      | boolean                     |
 limit_netloc | boolean                     |
 title        | citext                      |
 mimetype     | text                        |
 type         | itemtype_enum               |
 raw_content  | text                        |
 content      | text                        |
 fetchtime    | timestamp without time zone |
 addtime      | timestamp without time zone |
 tsv_content  | tsvector                    |
Indexes:
    "web_pages_pkey" PRIMARY KEY, btree (id)
    "ix_web_pages_url" UNIQUE, btree (url)
    "idx_web_pages_content" gin (tsv_content)
    "idx_web_pages_title" gin (to_tsvector('english'::regconfig, title::text))
    "ix_web_pages_distance" btree (distance)
    "ix_web_pages_distance_filtered" btree (priority) WHERE state = 'new'::dlstate_enum AND distance < 1000000
    "ix_web_pages_priority" btree (priority)
    "ix_web_pages_type" btree (type)
    "ix_web_pages_url_ops" btree (url text_pattern_ops)
Foreign-key constraints:
    "web_pages_file_fkey" FOREIGN KEY (file) REFERENCES web_files(id)
Triggers:
    web_pages_content_change_trigger AFTER INSERT OR UPDATE ON web_pages FOR EACH ROW EXECUTE PROCEDURE web_pages_content_update_func()

Extra bits aside, both have a content column, as well as a tsv_content column with a gin() index on it. There is a trigger that updates the tsv_content column every time the content column is modified.

Note that the other gin index works fine, and in fact I originally had a gin (to_tsvector('english'::regconfig, content::text)) index on the content column as well, rather than a second column, but after waiting for that index to rebuild a few times during testing, I decided to use a separate column to pre-store the tsvector values.
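(The actual web_pages_content_update_func isn't shown here; purely as a sketch, the built-in tsvector_update_trigger can maintain such a column, assuming a plain english configuration and a made-up trigger name:)

CREATE TRIGGER tsv_content_update
    BEFORE INSERT OR UPDATE ON web_pages
    FOR EACH ROW
    EXECUTE PROCEDURE tsvector_update_trigger(tsv_content, 'pg_catalog.english', content);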

Running a query against the test table uses the index, just as I'd expect:

webarchive=# EXPLAIN ANALYZE SELECT
    test_sites.id,
    test_sites.content,
    ts_rank_cd(test_sites.tsv_content, to_tsquery($$testing$$)) AS ts_rank_cd_1
FROM
    test_sites
WHERE
    test_sites.tsv_content @@ to_tsquery($$testing$$);
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on test_sites  (cost=16.45..114.96 rows=25 width=669) (actual time=0.175..3.720 rows=143 loops=1)
   Recheck Cond: (tsv_content @@ to_tsquery('testing'::text))
   Heap Blocks: exact=117
   ->  Bitmap Index Scan on idx_test_web_pages_content  (cost=0.00..16.44 rows=25 width=0) (actual time=0.109..0.109 rows=143 loops=1)
         Index Cond: (tsv_content @@ to_tsquery('testing'::text))
 Planning time: 0.414 ms
 Execution time: 3.800 ms
(7 rows)

However, the exact same query against the real table only ever seems to result in a plain old sequential scan:

webarchive=# EXPLAIN ANALYZE SELECT
       web_pages.id,
       web_pages.content,
       ts_rank_cd(web_pages.tsv_content, to_tsquery($$testing$$)) AS ts_rank_cd_1
   FROM
       web_pages
   WHERE
       web_pages.tsv_content @@ to_tsquery($$testing$$);
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 Seq Scan on web_pages  (cost=0.00..4406819.80 rows=19751 width=505) (actual time=0.343..142325.954 rows=134949 loops=1)
   Filter: (tsv_content @@ to_tsquery('testing'::text))
   Rows Removed by Filter: 12764373
 Planning time: 0.436 ms
 Execution time: 142341.489 ms
(5 rows)

I've increased my work_mem to 3 GB to see whether that was the issue, and it wasn't.

Also, it should be noted that these are fairly large tables: roughly 150 GB of text across 4 million rows (plus another 8 million rows where content/tsv_content is NULL).

The test_sites table has 1/1000th the rows of web_pages, since experimenting is somewhat prohibitive when every query takes multiple minutes.


I'm using postgresql 9.5 (yes, I compiled it myself; I wanted ON CONFLICT). There doesn't seem to be a tag for that yet.

I've read through the open issues for 9.5, and I can't see how this would be the result of any of them.


Having just completely rebuilt the index, the problem is still present:

webarchive=# ANALYZE web_pages ;
ANALYZE
webarchive=# EXPLAIN ANALYZE SELECT
    web_pages.id,
    web_pages.content,
    ts_rank_cd(web_pages.tsv_content, to_tsquery($$testing$$)) AS ts_rank_cd_1
FROM
    web_pages
WHERE
    web_pages.tsv_content @@ to_tsquery($$testing$$);
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on web_pages  (cost=10000000000.00..10005252343.30 rows=25109 width=561) (actual time=7.114..146444.168 rows=134949 loops=1)
   Filter: (tsv_content @@ to_tsquery('testing'::text))
   Rows Removed by Filter: 13137318
 Planning time: 0.521 ms
 Execution time: 146465.188 ms
(5 rows)

Note that I had literally just run ANALYZE, and seqscan was disabled (which is why the cost estimate above is so enormous).

postgresql full-text-search
  • 1 Answer
  • 640 Views
Fake Name
Asked: 2015-09-08 14:48:48 +0800 CST

How to properly implement composite greatest-n filtering

  • 5

Yes, more greatest-n-per-group questions.

Given a releases table with the following columns:

 id         | primary key                 | 
 volume     | double precision            |
 chapter    | double precision            |
 series     | integer-foreign-key         |
 include    | boolean                     | not null

I want to select the composite maximum of volume, then chapter, for a set of series.

Right now, if I query per-distinct-series, I can do this easily as follows:

SELECT 
       releases.chapter AS releases_chapter,
       releases.include AS releases_include,
       releases.series AS releases_series
FROM releases
WHERE releases.series = 741
  AND releases.include = TRUE
ORDER BY releases.volume DESC NULLS LAST, releases.chapter DESC NULLS LAST LIMIT 1;

However, if I have a lot of series (and I do), this quickly runs into efficiency problems, where I'm issuing 100+ queries to generate a single page.

I'd like to roll the whole thing into a single query, where I can simply say WHERE releases.series IN (1,2,3....), but I haven't figured out how to convince Postgres to let me do that.

The naive approach would be:

SELECT releases.volume AS releases_volume,
       releases.chapter AS releases_chapter,
       releases.series AS releases_series
FROM 
    releases
WHERE 
    releases.series IN (12, 17, 44, 79, 88, 110, 129, 133, 142, 160, 193, 231, 235, 295, 340, 484, 499, 
                        556, 581, 664, 666, 701, 741, 780, 790, 796, 874, 930, 1066, 1091, 1135, 1137, 
                        1172, 1331, 1374, 1418, 1435, 1447, 1471, 1505, 1521, 1540, 1616, 1702, 1768, 
                        1825, 1828, 1847, 1881, 2007, 2020, 2051, 2085, 2158, 2183, 2190, 2235, 2255, 
                        2264, 2275, 2325, 2333, 2334, 2337, 2341, 2343, 2348, 2370, 2372, 2376, 2606, 
                        2634, 2636, 2695, 2696 )
  AND releases.include = TRUE
GROUP BY 
    releases_series
ORDER BY releases.volume DESC NULLS LAST, releases.chapter DESC NULLS LAST;

This obviously doesn't work:

ERROR:  column "releases.volume" must appear in the 
        GROUP BY clause or be used in an aggregate function

Without the GROUP BY, it does fetch everything, and with some simple procedural filtering it would even work, but there has to be a "proper" way to do this in SQL.

Following the error message and adding aggregates:

SELECT max(releases.volume) AS releases_volume,
       max(releases.chapter) AS releases_chapter,
       releases.series AS releases_series
FROM 
    releases
WHERE 
    releases.series IN (12, 17, 44, 79, 88, 110, 129, 133, 142, 160, 193, 231, 235, 295, 340, 484, 499, 
                        556, 581, 664, 666, 701, 741, 780, 790, 796, 874, 930, 1066, 1091, 1135, 1137, 
                        1172, 1331, 1374, 1418, 1435, 1447, 1471, 1505, 1521, 1540, 1616, 1702, 1768, 
                        1825, 1828, 1847, 1881, 2007, 2020, 2051, 2085, 2158, 2183, 2190, 2235, 2255, 
                        2264, 2275, 2325, 2333, 2334, 2337, 2341, 2343, 2348, 2370, 2372, 2376, 2606, 
                        2634, 2636, 2695, 2696 )
  AND releases.include = TRUE
GROUP BY 
    releases_series;

mostly works, but the problem is that the two maximums aren't coherent. If I have two rows where volume:chapter are 1:5 and 4:1, I need 4:1 back, but the independent maximums return 4:5.

Frankly, this would be trivial to implement in my application code, so I must be missing something obvious here. How can I implement a query that actually meets my requirements?
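For reference, one candidate that looks like it fits, using DISTINCT ON (a sketch, not benchmarked against the real data):

SELECT DISTINCT ON (releases.series)
       releases.volume  AS releases_volume,
       releases.chapter AS releases_chapter,
       releases.series  AS releases_series
FROM   releases
WHERE  releases.series IN (12, 17, 44)   -- full IN list elided
  AND  releases.include = TRUE
ORDER  BY releases.series,
          releases.volume  DESC NULLS LAST,
          releases.chapter DESC NULLS LAST;

Because the rows are sorted by volume, then chapter, within each series, DISTINCT ON keeps the single row with the greatest (volume, chapter) pair per series, rather than two independent maximums.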

postgresql performance
  • 1 Answer
  • 98 Views
Fake Name
Asked: 2014-09-08 02:02:41 +0800 CST

DELETE returns nothing in Psycopg2?

  • 4

I have a fairly simple delete query in a PostgreSQL database that I'm interacting with via psycopg2.

Take the following minimal example:

def testDelete():
    db = DbInterface()
    cur = db.conn.cursor()
    cur.execute("DELETE FROM munamelist WHERE name='something'")
    print("Results = ", cur.fetchall())

Basically, the PostgreSQL documentation for DELETE states:

On successful completion, a DELETE command returns a command tag of the form

DELETE count

The count is the number of rows deleted. Note that the number may be less than the number of rows that matched the condition when deletes were suppressed by a BEFORE DELETE trigger. If count is 0, no rows were deleted by the query (this is not considered an error).

However, psycopg2 raises an error when you try to fetch the query results:

Traceback (most recent call last):
  File "autoOrganize.py", line 370, in <module>
    parseCommandLine()
  File "autoOrganize.py", line 363, in parseCommandLine
    testDelete()
  File "autoOrganize.py", line 247, in testDelete
    print("Results = ", cur.fetchall())
psycopg2.ProgrammingError: no results to fetch

Whether or not the item exists, you can't fetch the query results. Doesn't psycopg2 return the SQL "command tag"?

If not, how do I retrieve the number of changed rows that's reported in the console interface? Never mind: apparently it's cursor.rowcount, the number of rows modified by the last DML/DQL statement.
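(Relatedly, if actual row data is wanted back from the DELETE, adding a RETURNING clause makes the statement produce a result set that cur.fetchall() can consume; a sketch:)

DELETE FROM munamelist
WHERE  name = 'something'
RETURNING *;   -- returns the deleted rows (possibly zero of them)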

postgresql postgresql-9.3
  • 2 Answers
  • 10002 Views
Fake Name
Asked: 2014-09-01 17:32:18 +0800 CST

Are transactions in PostgreSQL via "psycopg2" per-cursor or per-connection?

  • 12

I'm doing some work against PostgreSQL 9.3 using the psycopg2 database API.

I have the DB API set to the lowest isolation level ("autocommit" mode), and I'm managing my own transactions directly via SQL. Example:

cur = self.conn.cursor()
cur.execute("BEGIN;")
cur.execute("SELECT dbId, downloadPath, fileName, tags FROM {tableName} WHERE dlState=%s".format(tableName=self.tableName), (2, ))
ret = cur.fetchall()
cur.execute("COMMIT;")

Basically, is the transaction started by cur.execute("BEGIN;") limited to just that cursor, or does it apply to the whole connection (self.conn.cursor())?

Some of the more complex things I'm doing involve multiple separate database operations, which I logically break up into functions. Since this is all in a class that has the connection as a member, it's much more convenient to create cursors within each function. However, I'm not sure how creating cursors inside a transaction works.

Basically, if transactions are per-connection, I can create lots of cursors on the fly inside a transaction. If they're per-cursor, that means I have to pass the cursor around everywhere. Which is it?

The documentation doesn't really touch on this, although the fact that you can call connection.commit() makes me fairly confident that transaction control is per-connection.

postgresql
  • 2 Answers
  • 10290 Views
Fake Name
Asked: 2014-08-24 22:44:27 +0800 CST

GROUP BY one column, while sorting by another, in PostgreSQL

  • 11

How can I GROUP BY one column, while sorting only by another?

I'm trying to do the following:

SELECT dbId,retreivalTime 
    FROM FileItems 
    WHERE sourceSite='something' 
    GROUP BY seriesName 
    ORDER BY retreivalTime DESC 
    LIMIT 100 
    OFFSET 0;

I want to select the last /n/ items from FileItems, in descending order, with the rows DISTINCT by seriesName. The query above errors out with ERROR: column "fileitems.dbid" must appear in the GROUP BY clause or be used in an aggregate function. I need the dbid value so I can take the output of this query and JOIN it against the source table to get the rest of the columns I'm after.

Note that this is basically the gestalt of the question below, with a lot of the extraneous detail removed for clarity.


Original question

I have a system I'm migrating from sqlite3 to PostgreSQL, because I've largely outgrown sqlite:

    SELECT
            d.dbId,
            d.dlState,
            d.sourceSite,
        [snip a bunch of rows]
            d.note

    FROM FileItems AS d
        JOIN
            ( SELECT dbId
                FROM FileItems
                WHERE sourceSite='{something}'
                GROUP BY seriesName
                ORDER BY MAX(retreivalTime) DESC
                LIMIT 100
                OFFSET 0
            ) AS di
            ON  di.dbId = d.dbId
    ORDER BY d.retreivalTime DESC;

Basically, I want to select the last n DISTINCT items from the database, where the distinct constraint is on one column, and the sort order is on another.

Unfortunately, the above query, while it works fine in sqlite, errors out in PostgreSQL with psycopg2.ProgrammingError: column "fileitems.dbid" must appear in the GROUP BY clause or be used in an aggregate function.

Unfortunately, while adding dbId to the GROUP BY clause fixes the error (e.g. GROUP BY seriesName,dbId), it means the distinct filtering of the query results no longer works, since dbid is the database primary key, and as such all its values are distinct.

Reading the Postgres documentation, there is SELECT DISTINCT ON ({nnn}), but that requires the returned results to be sorted by {nnn}.

So, to do what I want via SELECT DISTINCT ON, I'd have to query for all the DISTINCT {nnn} and their MAX(retreivalTime), then sort again by retreivalTime, then take the largest 100 and query the table with those to fetch the rest of the rows, which I'd like to avoid, since the database has ~175K rows with ~14K distinct values in the seriesName column, I only want the latest 100, and this query is somewhat performance-critical (I need query times < 1/2 second).
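For clarity, the SELECT DISTINCT ON approach described above would look roughly like this (a sketch only, not benchmarked):

SELECT dbId, retreivalTime
FROM (
    SELECT DISTINCT ON (seriesName)
           dbId, retreivalTime
    FROM   FileItems
    WHERE  sourceSite = 'something'
    ORDER  BY seriesName, retreivalTime DESC
) AS latest_per_series
ORDER BY retreivalTime DESC
LIMIT 100 OFFSET 0;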

My naive assumption here is basically that the database should just need to walk each row in descending order of retreivalTime, and stop once it has seen LIMIT items, so a full-table query isn't ideal, but I don't pretend to really understand how the database system optimizes internally, and I may be approaching this entirely wrong.

FWIW, I do occasionally use different OFFSET values, but long query times for cases where offset > ~500 are completely acceptable. Basically, OFFSET is a crappy pagination mechanism that lets me get away without dedicating a scrolling cursor to each connection, and I'll probably revisit it at some point.


For reference, the question I asked a month ago that led to this query.


OK, more notes:

    SELECT
            d.dbId,
            d.dlState,
            d.sourceSite,
        [snip a bunch of rows]
            d.note

    FROM FileItems AS d
        JOIN
            ( SELECT seriesName, MAX(retreivalTime) AS max_retreivalTime
                FROM FileItems
                WHERE sourceSite='{something}'
                GROUP BY seriesName
                ORDER BY max_retreivalTime DESC
                LIMIT %s
                OFFSET %s
            ) AS di
            ON  di.seriesName = d.seriesName AND di.max_retreivalTime = d.retreivalTime
    ORDER BY d.retreivalTime DESC;

The query works correctly as described, but if I remove the GROUP BY clause, it fails (it's optional in my application).

psycopg2.ProgrammingError: column "FileItems.seriesname" must appear in the GROUP BY clause or be used in an aggregate function

I guess I just don't understand how subqueries work in PostgreSQL. Where am I going wrong? I was under the impression that a subquery is basically just an inline function, whose results are simply fed into the main query.

postgresql performance
  • 3 Answers
  • 39668 Views
Fake Name
Asked: 2014-07-25 22:25:29 +0800 CST

Improving GROUP BY query performance in sqlite3

  • 7

I have a small web application that uses sqlite3 as its database (the database is fairly small).

Right now, I'm generating some content to display with the following query:

SELECT dbId,
        dlState,
        retreivalTime,
        seriesName,
        <snip irrelevant columns>
        FROM DataItems
        GROUP BY seriesName
        ORDER BY retreivalTime DESC
        LIMIT ?
        OFFSET ?;

where limit is typically ~200, and offset is 0 (they drive a pagination mechanism).

Anyway, right now this query absolutely kills my performance. It takes roughly 800 ms to execute on a table of about 67K rows.

I have indexes on both seriesName and retreivalTime.

sqlite> SELECT name FROM sqlite_master WHERE type='index' ORDER BY name;
<snip irrelevant indexes>
DataItems_seriesName_index
DataItems_time_index           // This is the index on retreivalTime. Yeah, it's poorly named

However, EXPLAIN QUERY PLAN seems to indicate they aren't being used:

sqlite> EXPLAIN QUERY PLAN SELECT dbId, 
                                  dlState, 
                                  retreivalTime, 
                                  seriesName 
                                  FROM 
                                      DataItems 
                                  GROUP BY 
                                      seriesName 
                                  ORDER BY 
                                      retreivalTime 
                                  DESC LIMIT 200 OFFSET 0;
0|0|0|SCAN TABLE DataItems
0|0|0|USE TEMP B-TREE FOR GROUP BY
0|0|0|USE TEMP B-TREE FOR ORDER BY

The index on seriesName is COLLATE NOCASE, if that's relevant.

If I drop the GROUP BY, it behaves as expected:

sqlite> EXPLAIN QUERY PLAN SELECT dbId, dlState, retreivalTime, seriesName FROM DataItems ORDER BY retreivalTime DESC LIMIT 200 OFFSET 0;
0|0|0|SCAN TABLE DataItems USING INDEX DataItems_time_index

Basically, my naive assumption is that the best way to execute this query would be to walk backwards from the most recent value of retreivalTime, and every time a new seriesName is seen, append the row to a temporary list, finally returning that list. That would give somewhat worse performance for large OFFSET values, but those occur very rarely in this application.

How can I optimize this query? I can provide the raw query operations if needed.

Insert performance doesn't matter here, so if I need to create an extra index or two, that's fine.
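In that vein, one thing that might be worth trying (an assumption, not something tested here) is a composite index matching the GROUP BY column plus the time column, and then checking whether the planner picks it up instead of the temp b-trees; collation (the existing NOCASE index) may matter:

CREATE INDEX IF NOT EXISTS DataItems_series_time_index
    ON DataItems (seriesName, retreivalTime);

EXPLAIN QUERY PLAN
SELECT dbId, dlState, retreivalTime, seriesName
FROM DataItems
GROUP BY seriesName
ORDER BY retreivalTime DESC
LIMIT 200 OFFSET 0;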


My current thinking is a commit hook that updates a separate table used just for tracking unique items, but that seems like overkill.

performance optimization
  • 2 Answers
  • 9486 Views
Fake Name
Asked: 2014-07-24 00:55:02 +0800 CST

Fast hamming-distance queries in postgres

  • 20

I have a large database (16M rows) containing perceptual hashes of images.

I'd like to be able to search for rows by hamming distance within a reasonable timeframe.

Currently, to the extent I properly understand the problem, I think the best option here is a custom SP-GiST implementation of a BK-tree, but that seems like a lot of work, and I'm still fuzzy on the practical details of properly implementing a custom index. Computing the hamming distance itself is tractable enough, and I do know C, though.

Basically, what is the appropriate approach here? I need to be able to query for matches within a certain edit distance of a hash. As I understand it, Levenshtein distance on strings of equal length is functionally the hamming distance, so there is at least some existing support for what I want, though there's no clear way to create an index from it (remember, the value I'm querying against changes; I can't precompute distances from a fixed value, since that would only be useful for that one value).

The hashes are currently stored as 64-character strings containing the binary ASCII encoding of the hash (e.g. "10010101..."), but I could convert them to int64 easily enough. The real issue is that I need to be able to query reasonably quickly.

It seems like it might be possible to achieve what I want with pg_trgm, but I'm a bit unclear on how the trigram-matching mechanism works (in particular, what does the similarity metric it returns actually represent? It looks sort of like an edit distance).

Insert performance is not critical (computing the hash for each row is extremely expensive), so I mostly care about searching.
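For what it's worth, if the hashes are converted to int64 as mentioned, the distance itself is easy to express in plain SQL even without a custom C function; the table and column names below are made up, and without an index structure such as a BK-tree this is still a full sequential scan:

CREATE OR REPLACE FUNCTION hamming_distance(a bigint, b bigint)
RETURNS integer LANGUAGE sql IMMUTABLE AS $$
    -- XOR the two hashes, then count the 1 bits in the 64-bit result.
    SELECT length(replace(((a # b)::bit(64))::text, '0', ''));
$$;

SELECT id
FROM   image_hashes   -- hypothetical table of (id, phash bigint)
WHERE  hamming_distance(phash, 4429846623178506477) <= 4;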

postgresql index
  • 2 Answers
  • 7908 Views
