AskOverflow.Dev

Questions tagged [optimization] (dba)

Nelson Nokimi
Asked: 2024-02-05 16:37:50 +0800 CST

mariadbd process CPU usage exceeds 140% despite a high innodb_buffer_pool_size

  • 6
This question was migrated from Stack Overflow because it can be answered on Database Administrators Stack Exchange. Migrated 12 days ago.

I have a Plesk Ubuntu 22.04 server with 120 GB of RAM and MariaDB 10.6.16, which I use to run the database of a single website. The database itself is only about 8 GB, yet despite all my optimizations MariaDB still runs at over 140% CPU. This slows down the website's response times.

I have done a fair amount of Googling and tuned PHP-FPM and MySQL, but the site is still slow despite that, and I point the finger at mariadbd, whose process uses more than 140% of the CPU. Screenshots below.

mariadbd process (screenshot)

PHP-FPM configuration

I set:

pm.max_children = 90
pm.max_requests = 200
pm = static
memory_limit = 10240M
max_execution_time = 300
max_input_time = 600
post_max_size = 1024M
upload_max_filesize = 512M

MariaDB configuration

In /etc/mysql/my.cnf I set these values:

[mysqld]
sql_mode=ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
bind-address = 127.0.0.1
local-infile=0
loose-local-infile=1
innodb_buffer_pool_size = 96G
innodb_log_file_size = 12G
innodb_log_buffer_size = 64M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
query_cache_type = 1
query_cache_size = 256M
query_cache_limit = 512M
query_cache_min_res_unit = 4k
innodb_lock_wait_timeout = 6000

This is also odd, but in the memory screenshot below the cached portion (beige) is far too small. Memory (screenshot)

The site ran fine on a 64 GB Ubuntu 20.04 server with no problems, but since the migration it has been slow, even though we went from 64 GB to 120 GB of RAM on the Ubuntu 22.04 server.

Has anyone run into this kind of problem before?

Thanks in advance for your help.

Regards


Additional information based on the comments: the server CPU is an AMD EPYC 7282 16-core processor (24 vCPUs), and the drive is NVMe.

top command (screenshot)

SELECT COUNT(*) FROM information_schema.tables;

(result screenshot)

MariaDB [(none)]> SHOW ENGINE INNODB STATUS;

| InnoDB |      |
=====================================
2024-02-06 09:18:33 0x7fd0721fe640 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 55 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 0 srv_active, 0 srv_shutdown, 386065 srv_idle
srv_master_thread log flush and writes: 386059
----------
SEMAPHORES
----------
------------
TRANSACTIONS
------------
Trx id counter 5295351
Purge done for trx's n:o < 5295349 undo n:o < 0 state: running
History list length 1
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION (0x7fe8c01d0880), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cdc80), not started
mysql tables in use 1, locked 0
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cb080), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c8480), not started
mysql tables in use 6, locked 0
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c4d80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cbb80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c4280), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01d1380), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cd180), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01ce780), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cc680), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c3780), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c0b80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01d1e80), not started
mysql tables in use 2, locked 0
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c9a80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cfd80), not started
mysql tables in use 1, locked 0
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01cf280), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01ca580), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c7980), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c2c80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c6e80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c2180), not started
mysql tables in use 5, locked 0
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c8f80), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c6380), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c5880), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
---TRANSACTION (0x7fe8c01c1680), not started
0 lock struct(s), heap size 1128, 0 row lock(s)
--------
FILE I/O
--------
Pending flushes (fsync) log: 0; buffer pool: 0
406980 OS file reads, 1500489 OS file writes, 309011 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 1.07 writes/s, 0.56 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
 insert 0, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 10381178784
Log flushed up to   10381178282
Pages flushed up to 9758411800
Last checkpoint at  9758411788
0 pending log flushes, 0 pending chkp writes
1500490 log i/o's done, 1.07 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 103213432832
Dictionary memory allocated 613451784
Buffer pool size   6230016
Free buffers       5820886
Database pages     409130
Old database pages 151006
Modified db pages  21938
Percent of dirty pages(LRU & free pages): 0.352
Max dirty pages percent: 90.000
Pending reads 0
Pending writes: LRU 0, flush list 0
Pages made young 1300160, not young 5227092
0.04 youngs/s, 0.00 non-youngs/s
Pages read 406341, created 12733, written 0
0.00 reads/s, 0.02 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 409130, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 read views open inside InnoDB
Process ID=0, Main thread ID=0, state: sleeping
Number of rows inserted 774791, updated 948664, deleted 367, read 777328684815
0.85 inserts/s, 0.00 updates/s, 0.00 deletes/s, 1854176.96 reads/s
Number of system rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

The output of the other commands is too long to include in the question given Stack Exchange's size limits.

Additional information:

SHOW GLOBAL STATUS; https://justpaste.it/4iip0

SHOW GLOBAL VARIABLES; https://justpaste.it/d9k81

STATUS; https://justpaste.it/fk95g

ulimit -a https://justpaste.it/bc50r
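The status output above shows roughly 1.85 million rows read per second against essentially zero disk reads and a 1000/1000 buffer-pool hit rate, which suggests the CPU is being spent scanning rows in memory rather than waiting on storage. A minimal sketch of how one might locate the responsible queries with MariaDB's slow query log (the threshold values are assumptions, not from the post):

```sql
-- Enable the slow query log at runtime; settings revert on restart unless
-- they are also added to my.cnf.
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;                 -- log statements slower than 1 s
SET GLOBAL log_queries_not_using_indexes = ON;  -- catch fast-but-hot full scans
-- Find the log file location:
SELECT @@slow_query_log_file;
-- Summarize with the bundled tool, e.g.:
--   mysqldumpslow -s t /var/lib/mysql/<host>-slow.log
```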

optimization
  • 3 个回答
  • 95 Views
Lev M.
Asked: 2022-09-18 04:25:14 +0800 CST

MySQL: adding a key to a table that is in use

  • 0

I am running MySQL server v5.7.18 on CentOS 7.

One of the tables serves as an operations log for a Web API and has grown to 20 million rows.

Users search this table from a front-end application that runs queries like:

SELECT * FROM log_table WHERE some_long_id = '12345abdcef';

The some_long_id column is of a text type and contains UUID-style alphanumeric IDs.
Currently this query takes about 20 seconds to complete.

I found that if I add a key on this column, the query runs almost instantly:
ALTER TABLE log_table ADD KEY (some_long_id(64));

The operation takes a long time, but it does the job.
I tested it on an offline copy of the database.

My question: can I safely run the ADD KEY operation on the production database without shutting down the web application?

An API continuously inserts new rows into this table, possibly several times per second.

Does this affect indexing? Will it break anything? Will inserts fail while the table is being indexed?

Edit: I forgot to mention that the table uses the InnoDB engine.

P.S. I am not a DBA, I just have to manage this thing, so my whole approach may be wrong.
"Do this instead" answers are also welcome.

Also, Googling gave no relevant results for this question; sorry if it is a duplicate.
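On MySQL 5.7, adding a secondary index to an InnoDB table is an online DDL operation, so concurrent inserts keep working while the index is built. A hedged sketch (the index name is mine); stating ALGORITHM and LOCK explicitly makes the statement fail fast instead of silently blocking writes if the online path is unavailable:

```sql
ALTER TABLE log_table
    ADD KEY idx_some_long_id (some_long_id(64)),
    ALGORITHM=INPLACE, LOCK=NONE;
```

Concurrent changes are buffered during the build and applied at the end; the main costs are extra disk/temp space and brief metadata locks at the start and end of the operation.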

index optimization
  • 1 Answer
  • 17 Views
lifeisajourney
Asked: 2022-08-20 11:36:05 +0800 CST

Stored procedure performance testing is skewed by cached data

  • 4

I have a stored procedure that takes about 15 seconds on its first run and 1 to 2 seconds on subsequent runs. If I wait an hour and run it again, it takes 15 seconds again.

I am guessing that subsequent runs use cached data from the buffer pool, while the first run has to load the data from disk into the buffer pool. I am trying to tune this stored procedure, but after the first run I cannot test my changes because it only takes 1 to 2 seconds.

I know I could use the DBCC DROPCLEANBUFFERS command to free the cache and rerun my stored procedure, but I am not allowed to clear the cache at work. I also tried WITH RECOMPILE, but that only creates a new plan and still uses the cached data. Is there another way to force a stored procedure not to use cached data?
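One cache-independent way to compare tuning attempts (a suggestion on my part, not from the original post): logical reads are counted the same whether a page comes from disk or from the buffer pool, so tuning for fewer logical reads sidesteps the warm-cache problem entirely:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.MyProcedure;   -- hypothetical procedure name
-- Compare the "logical reads" per table in the Messages output between
-- versions of the procedure; those numbers do not depend on cache state.
```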

sql-server optimization
  • 2 Answers
  • 191 Views
Moayad .AlMoghrabi
Asked: 2022-08-16 03:55:42 +0800 CST

Optimizing SQL queries over 5 million records (to JOIN or not?!)

  • -1

I have a products table with almost 5 million records.

It has a column for the product category (product_category). It is currently of type INT(11), is indexed, and references another table (the categories table), which contains only the category names.

The category names are static and are never updated or edited.

What is the best solution for always getting the product's category name with the fastest query?

  • Use a JOIN.
  • Use a subquery.
  • Store the category name as a string in the products table.

Any other suggestions are welcome; which of the above options is the best solution?
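For reference, the JOIN variant the first bullet describes might look like this (column names are assumptions based on the post); with product_category indexed and a small, static categories table, the extra lookup is typically negligible:

```sql
SELECT p.*, c.name AS category_name
FROM products p
JOIN categories c ON c.id = p.product_category
WHERE p.id = 12345;   -- hypothetical single-product lookup
```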

optimization query-performance
  • 1 Answer
  • 98 Views
Lucas03
Asked: 2022-06-10 01:15:52 +0800 CST

Optimizing the filter of an SQL query on PostgreSQL

  • 0

I have a query whose result filtering is expensive, and I thought I should add an index to optimize the plan, but the indexes I have tried so far have had no effect. Can I optimize the query by adding a composite index on the filtered columns? Here is the plan:

Limit  (cost=3069.33..14926.59 rows=4 width=509) (actual time=258424.190..258424.197 rows=4 loops=1)
  InitPlan 1 (returns $0)
    ->  HashAggregate  (cost=82.19..82.99 rows=80 width=8) (actual time=1320.215..1320.535 rows=2045 loops=1)
          Group Key: booking_passengers.bid
          Batches: 1  Memory Usage: 257kB
          ->  Index Scan using idx_booking_passengers_user_id on booking_passengers  (cost=0.44..81.99 rows=80 width=8) (actual time=10.687..1314.519 rows=2045 loops=1)
                Index Cond: ((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text)
  ->  Incremental Sort  (cost=2986.35..18414332.62 rows=6211 width=509) (actual time=258424.188..258424.189 rows=4 loops=1)
        Sort Key: booking_data.last_segment_arrival_at DESC, booking_data.bid
        Presorted Key: booking_data.last_segment_arrival_at
        Full-sort Groups: 1  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
        ->  Index Scan Backward using idx_booking_data_last_segment_arrival_at on booking_data  (cost=0.44..18414054.67 rows=6211 width=509) (actual time=48419.376..258424.093 rows=5 loops=1)
              Index Cond: (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone)
              Filter: ((is_deleted IS FALSE) AND (bid >= 1100000) AND (((confirmation_sent IS TRUE) AND ((final_status)::text <> 'refunded'::text)) OR ((final_status)::text = 'confirmed'::text) OR ((confirmation_sent IS FALSE) AND ((final_status)::text = 'closed'::text))) AND (((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text) OR (bid = ANY ($0))))
              Rows Removed by Filter: 2315888
Planning Time: 2.132 ms
Execution Time: 258424.387 ms

Here is the query:

explain analyze
    SELECT *
      FROM booking_data
      WHERE booking_data.bid >= 1100000
        AND booking_data.is_deleted IS false
        AND booking_data.last_segment_arrival_at < '2022-06-13 13:36'
        AND (booking_data.user_id = 'NJ8QigsGcQCDOttoGsD3iS'
                 OR booking_data.bid = ANY (CAST(array((
                     SELECT DISTINCT booking_passengers.bid AS anon_2
                     FROM booking_passengers
                     WHERE booking_passengers.user_id = 'NJ8QigsGcQCDOttoGsD3iS')) AS BIGINT[]))
            )
        AND (booking_data.confirmation_sent IS true
                 AND booking_data.final_status != 'refunded'
                 OR booking_data.final_status = 'confirmed'
                 OR booking_data.confirmation_sent IS false
                        AND booking_data.final_status IN ('closed')
            )
      ORDER BY booking_data.last_segment_arrival_at DESC, booking_data.bid ASC
      LIMIT 4 OFFSET 0

Current indexes on the booking_data table:

create index idx_booking_data_final_status on booking_data (final_status);
create index idx_booking_data_user_id on booking_data (user_id);
create index idx_booking_data_last_segment_arrival_at on booking_data (last_segment_arrival_at);
create index idx_booking_data_first_segment_arrival_at on booking_data (first_segment_arrival_at);
create index idx_booking_data_confirmed_at on booking_data (confirmed_at);
create index idx_booking_data_booked_email on booking_data (booked, email);
create index idx_booking_data_first_last_segment_bid_user_id on booking_data (first_segment_arrival_at, last_segment_arrival_at, bid, user_id);

I added this index:

CREATE index CONCURRENTLY idx_booking_data_user_id_last_segment_arrival_at on booking_data (user_id, last_segment_arrival_at);

Here is the resulting plan on the staging database (a weaker instance with production data):

Limit  (cost=13432.55..13432.56 rows=4 width=509) (actual time=11958.229..11958.235 rows=4 loops=1)
  InitPlan 1 (returns $0)
    ->  HashAggregate  (cost=82.19..82.99 rows=80 width=8) (actual time=2741.877..2742.215 rows=2053 loops=1)
          Group Key: booking_passengers.bid
          Batches: 1  Memory Usage: 257kB
          ->  Index Scan using idx_booking_passengers_user_id on booking_passengers  (cost=0.44..81.99 rows=80 width=8) (actual time=18.064..2734.284 rows=2053 loops=1)
                Index Cond: ((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text)
  ->  Sort  (cost=13349.57..13365.09 rows=6210 width=509) (actual time=11958.227..11958.230 rows=4 loops=1)
        Sort Key: booking_data.last_segment_arrival_at DESC, booking_data.bid
        Sort Method: top-N heapsort  Memory: 28kB
        ->  Bitmap Heap Scan on booking_data  (cost=195.64..13256.42 rows=6210 width=509) (actual time=3771.506..11952.815 rows=854 loops=1)
              Recheck Cond: ((((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text) AND (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone)) OR ((bid = ANY ($0)) AND (bid >= 1100000)))
              Filter: ((is_deleted IS FALSE) AND (bid >= 1100000) AND (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone) AND (((confirmation_sent IS TRUE) AND ((final_status)::text <> 'refunded'::text)) OR ((final_status)::text = 'confirmed'::text) OR ((confirmation_sent IS FALSE) AND ((final_status)::text = 'closed'::text))))
              Rows Removed by Filter: 10202
              Heap Blocks: exact=10935
              ->  BitmapOr  (cost=195.64..195.64 rows=12634 width=0) (actual time=3718.959..3718.961 rows=0 loops=1)
                    ->  Bitmap Index Scan on idx_booking_data_user_id_last_segment_arrival_at  (cost=0.00..176.81 rows=12625 width=0) (actual time=17.294..17.294 rows=11025 loops=1)
                          Index Cond: (((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text) AND (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone))
                    ->  Bitmap Index Scan on booking_data_pkey  (cost=0.00..15.72 rows=10 width=0) (actual time=3701.663..3701.663 rows=2062 loops=1)
                          Index Cond: ((bid = ANY ($0)) AND (bid >= 1100000))
Planning Time: 2.263 ms
Execution Time: 11958.434 ms

After the first run of the query, execution is much faster:

Limit  (cost=13432.55..13432.56 rows=4 width=509) (actual time=29.641..29.647 rows=4 loops=1)
  InitPlan 1 (returns $0)
    ->  HashAggregate  (cost=82.19..82.99 rows=80 width=8) (actual time=2.507..2.761 rows=2053 loops=1)
          Group Key: booking_passengers.bid
          Batches: 1  Memory Usage: 257kB
          ->  Index Scan using idx_booking_passengers_user_id on booking_passengers  (cost=0.44..81.99 rows=80 width=8) (actual time=0.021..1.664 rows=2053 loops=1)
                Index Cond: ((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text)
  ->  Sort  (cost=13349.57..13365.09 rows=6210 width=509) (actual time=29.640..29.643 rows=4 loops=1)
        Sort Key: booking_data.last_segment_arrival_at DESC, booking_data.bid
        Sort Method: top-N heapsort  Memory: 28kB
        ->  Bitmap Heap Scan on booking_data  (cost=195.64..13256.42 rows=6210 width=509) (actual time=11.942..28.832 rows=854 loops=1)
              Recheck Cond: ((((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text) AND (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone)) OR ((bid = ANY ($0)) AND (bid >= 1100000)))
              Filter: ((is_deleted IS FALSE) AND (bid >= 1100000) AND (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone) AND (((confirmation_sent IS TRUE) AND ((final_status)::text <> 'refunded'::text)) OR ((final_status)::text = 'confirmed'::text) OR ((confirmation_sent IS FALSE) AND ((final_status)::text = 'closed'::text))))
              Rows Removed by Filter: 10202
              Heap Blocks: exact=10935
              ->  BitmapOr  (cost=195.64..195.64 rows=12634 width=0) (actual time=10.139..10.140 rows=0 loops=1)
                    ->  Bitmap Index Scan on idx_booking_data_user_id_last_segment_arrival_at  (cost=0.00..176.81 rows=12625 width=0) (actual time=2.024..2.024 rows=11025 loops=1)
                          Index Cond: (((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text) AND (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone))
                    ->  Bitmap Index Scan on booking_data_pkey  (cost=0.00..15.72 rows=10 width=0) (actual time=8.113..8.113 rows=2062 loops=1)
                          Index Cond: ((bid = ANY ($0)) AND (bid >= 1100000))
Planning Time: 0.404 ms
Execution Time: 29.765 ms

On the production instance every run of the query is slow, even though it is a more powerful instance (the idx_booking_data_user_id_last_segment_arrival_at index is not used):

Limit  (cost=523.03..2268.86 rows=4 width=509) (actual time=28549.479..28549.482 rows=4 loops=1)
  InitPlan 1 (returns $0)
    ->  HashAggregate  (cost=82.19..82.99 rows=80 width=8) (actual time=155.070..155.307 rows=2053 loops=1)
          Group Key: booking_passengers.bid
          Batches: 1  Memory Usage: 257kB
          ->  Index Scan using idx_booking_passengers_user_id on booking_passengers  (cost=0.44..81.99 rows=80 width=8) (actual time=0.414..153.733 rows=2053 loops=1)
                Index Cond: ((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text)
  ->  Incremental Sort  (cost=440.05..2710839.81 rows=6210 width=509) (actual time=28549.478..28549.479 rows=4 loops=1)
        Sort Key: booking_data.last_segment_arrival_at DESC, booking_data.bid
        Presorted Key: booking_data.last_segment_arrival_at
        Full-sort Groups: 1  Sort Method: quicksort  Average Memory: 27kB  Peak Memory: 27kB
        ->  Index Scan Backward using idx_booking_data_last_segment_arrival_at on booking_data  (cost=0.44..2710561.90 rows=6210 width=509) (actual time=2034.195..28549.417 rows=5 loops=1)
              Index Cond: (last_segment_arrival_at < '2022-06-13 13:36:00+00'::timestamp with time zone)
              Filter: ((is_deleted IS FALSE) AND (bid >= 1100000) AND (((confirmation_sent IS TRUE) AND ((final_status)::text <> 'refunded'::text)) OR ((final_status)::text = 'confirmed'::text) OR ((confirmation_sent IS FALSE) AND ((final_status)::text = 'closed'::text))) AND (((user_id)::text = 'NJ8QigsGcQCDOttoGsD3iS'::text) OR (bid = ANY ($0))))
              Rows Removed by Filter: 2323153
Planning Time: 1.845 ms
Execution Time: 28549.694 ms

Could this be a matter of table statistics (ANALYZE)?

SELECT schemaname, relname, last_analyze FROM pg_stat_all_tables WHERE relname = 'booking_passengers';

So I ran ANALYZE on both relevant tables:

ANALYZE VERBOSE public.booking_data;
ANALYZE VERBOSE public.booking_passengers;

The index is still not used on production :(

A comment asked: your WHERE has 5 ANDed-together blocks. Without the LIMIT, how many rows does each return on its own?

select count(*) FROM booking_data WHERE bid >= 1100000;                                -- 28208008
select count(*) FROM booking_data WHERE is_deleted IS false;                           -- 29249188
select count(*) FROM booking_data WHERE last_segment_arrival_at < '2022-06-13 13:36';  -- 23594003

select count(*)
FROM booking_data
WHERE (booking_data.user_id = 'NJ8QigsGcQCDOttoGsD3iS'
    OR booking_data.bid = ANY (CAST(array((
        SELECT DISTINCT booking_passengers.bid AS anon_2
        FROM booking_passengers
        WHERE booking_passengers.user_id = 'NJ8QigsGcQCDOttoGsD3iS')) AS BIGINT[]))
          )

11079

select count(*)
FROM booking_data
WHERE (booking_data.confirmation_sent IS true
                 AND booking_data.final_status != 'refunded'
                 OR booking_data.final_status = 'confirmed'
                 OR booking_data.confirmation_sent IS false
                        AND booking_data.final_status IN ('closed')
            )

17294003

I ran ANALYZE with a higher statistics_target, as suggested:

show default_statistics_target ;
set default_statistics_target to 1000;
ANALYZE VERBOSE public.booking_data;
ANALYZE VERBOSE public.booking_passengers;

But the index on user_id and last_segment_arrival_at is still not used :(
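One more thing that might be worth trying (my suggestion, not from the post): an index whose column order matches both the filter and the ORDER BY, so the first 4 rows can be returned without scanning millions of index entries, combined with a partial-index predicate that excludes deleted rows:

```sql
CREATE INDEX CONCURRENTLY idx_booking_data_user_arrival_bid
    ON booking_data (user_id, last_segment_arrival_at DESC, bid)
    WHERE is_deleted IS FALSE;
```

Note the OR against `bid = ANY (...)` still needs a separate path (e.g. a BitmapOr with the primary key), so this index alone may not cover the whole predicate.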

postgresql optimization
  • 1 Answer
  • 282 Views
Hamza Hamdani
Asked: 2022-05-11 06:09:48 +0800 CST

Index optimization for tables that are truncated and reloaded daily

  • 0

I am a recently hired data engineer. I have built a set of ETL pipelines scheduled to run daily; the target tables are truncated and loaded again. The DBMS is an on-premises SQL Server.

When I arrived I found other ETLs already running daily as well, but when I checked the fragmentation percentage of the indexes the ratios were far too high, so I am considering creating an optimization task for all indexes.

Most clustered indexes (primary keys) are doing fine, but the nonclustered indexes are highly fragmented. How and when should I rebuild/reorganize the indexes? After loading the new data, or before?
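A hedged sketch of the usual rule of thumb (index and table names here are hypothetical): reorganize between roughly 5% and 30% fragmentation, rebuild above that, and run the maintenance after the daily load so the freshly inserted data is what ends up defragmented:

```sql
-- Check fragmentation first:
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;                  -- ~5-30%
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = ON);  -- >30%; ONLINE needs Enterprise edition
```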

sql-server optimization
  • 1 Answer
  • 86 Views
Ahmad Zahratulaev
Asked: 2022-05-11 00:08:24 +0800 CST

Autovacuum does not clean the database

  • 0

Autovacuum is not cleaning the database. Medium-sized database, Postgres 10.18 on AWS RDS (2 vCPU, 8 GB RAM, 1100 GiB SSD (gp2)).

Table spree_datafeed_products:

relid               | 16556
schemaname          | public
relname             | spree_datafeed_products
seq_scan            | 20
seq_tup_read        | 365522436
idx_scan            | 962072108
idx_tup_fetch       | 9929276855
n_tup_ins           | 2846455
n_tup_upd           | 35778058
n_tup_del           | 284291955
n_tup_hot_upd       | 0
n_live_tup          | 3546840
n_dead_tup          | 338790851
n_mod_since_analyze | 307930753
last_vacuum         | 
last_autovacuum     | 
last_analyze        | 
last_autoanalyze    | 2022-04-29 13:01:43.985749+00
vacuum_count        | 0
autovacuum_count    | 0
analyze_count       | 0
autoanalyze_count   | 1

Table and index sizes:

indexname                           | size  
index_spree_datafeed_products_on_updated_at                  | 48 GB
index_spree_datafeed_products_on_state                       | 35 GB
index_spree_datafeed_products_on_size_variant_field          | 40 GB
index_spree_datafeed_products_on_product_id                  | 32 GB
index_spree_datafeed_products_on_original_id                 | 31 GB
index_spree_datafeed_products_on_datafeed_id                 | 42 GB
index_spree_datafeed_products_on_datafeed_id_and_original_id | 31 GB
index_spree_datafeed_products_on_data_hash                   | 39 GB
spree_datafeed_products_pkey                                 | 18 GB

pg_size_pretty (total): 419 GB

Worker:

datid            | 16404
datname          | milanstyle_production
pid              | 2274
backend_start    | 2022-05-01 19:52:00.066097+00
xact_start       | 2022-05-01 19:52:00.23692+00
query_start      | 2022-05-01 19:52:00.23692+00
state_change     | 2022-05-01 19:52:00.236921+00
wait_event_type  | 
wait_event       | 
state            | active
backend_xid      | 
backend_xmin     | 1301636863
query            | autovacuum: VACUUM ANALYZE public.spree_datafeed_products
backend_type     | autovacuum worker

Settings:

autovacuum on
autovacuum_analyze_scale_factor 0.05
autovacuum_analyze_threshold 50
autovacuum_freeze_max_age 200000000
autovacuum_max_workers 3
autovacuum_multixact_freeze_max_age 400000000
autovacuum_naptime 30
autovacuum_vacuum_cost_delay 20
autovacuum_vacuum_cost_limit -1
autovacuum_vacuum_scale_factor 0.1
autovacuum_vacuum_threshold 50

The cleanup script has accumulated a huge number of deleted entries. We have been waiting for more than a week (for autovacuum to clear them). What is the problem? Why is the database falling behind?
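Given ~338 million dead tuples, the default scale factor (10% of the table) plus the cost-based delay make autovacuum far too slow for a table this size. A common remedy (a sketch, not a guaranteed fix; a long-running transaction holding back `backend_xmin` can also prevent dead tuples from being removed) is to make vacuum more aggressive for this one table and clear the backlog manually first:

```sql
-- Per-table settings: trigger autovacuum earlier and remove the throttling delay.
ALTER TABLE public.spree_datafeed_products SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 0
);

-- One-off manual vacuum: runs unthrottled by default and clears the backlog.
VACUUM (VERBOSE) public.spree_datafeed_products;
```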

postgresql optimization
  • 1 Answer
  • 46 Views
Ross Bush
Asked: 2022-04-28 10:29:02 +0800 CST

UDF and built-in function behavior

  • 0

Given the following query:

DECLARE @X VARCHAR(200) = '1,2,3,4'

SELECT
    *,
    dbo.aUserDefinedScalarFunction(4) AS ScalarValue
FROM
   MyTable T
   INNER JOIN dbo.aUserDefineTableFunction(@X) A ON T.SomeID=A.SomeID
WHERE 
  (T.ID1 IS NULL OR T.ID1 IN (SELECT [value] FROM STRING_SPLIT(@X,',')))
  AND
  (T.ID2 IS NULL OR T.ID2 IN (SELECT Value FROM dbo.MySplitterFunction(@X)))

I usually create indexed #tempTables for the WHERE conditions above, which I find performs better on large data sets. However, I still cannot find a definitive answer to the following questions:

  1. Will the query optimizer evaluate dbo.aUserDefinedScalarFunction(4) once for ScalarValue, or evaluate it for every record?

  2. Will INNER JOIN dbo.aUserDefineTableFunction(@X) be materialized once, like a temp table, or executed for every record? The function returns a table (not a table variable).

  3. Is the result of SELECT [value] FROM STRING_SPLIT(@X,',') computed once, or evaluated for every comparison?

  4. Is the result of SELECT Value FROM dbo.MySplitterFunction(@X) computed once, or evaluated during every comparison?

sql-server optimization
  • 2 Answers
  • 62 Views
joemac12
Asked: 2021-11-05 12:54:35 +0800 CST

How the optimizer executes scalar functions

  • 1

I have read online that scalar functions hurt performance because the optimizer cannot see into their contents. Since the function is executed for every row, does the optimizer have to build an execution plan for the function's contents each time, or does it build the plan on first access and then reuse it for all other rows?

sql-server optimization
  • 2 Answers
  • 145 Views
atos
Asked: 2021-10-10 08:19:13 +0800 CST

Filtering a large number of rows in Cassandra

  • 2

Suppose we have many potentially heavy rows in a table (say 500k) that we want to filter by primary key and send over the Internet to a processing engine. Is it reasonable to use the IN clause?

optimization query-performance
  • 1 Answer
  • 198 Views
