
Questions tagged [performance-tuning] (dba)

sarath sanil
Asked: 2022-07-28 21:03:35 +0800 CST

Parallel insert not shown in the execution plan of an MSSQL stored procedure

  • 0

[Screenshot: execution plan showing a parallel insert]

As you can see in the screenshot above, this query gets a parallel insert:

INSERT INTO #StateAllocationData WITH (TABLOCK)
  (ProjectID,StateId,StateLineDescriptionId,PartnerID,Value)  
SELECT @ProjectID as ProjectID,sld.StateId,sld.ID,TaxReturnPartnerNumber,0 as Value  
FROM Meta.States S(NOLOCK)  
   LEFT JOIN Meta.StateAllocationLineDescriptions SLD(NOLOCK) ON S.StateId = SLD.StateId  
   join Ottp.PartnerData PD on ProjectID=@Projectid  
WHERE  SLD.isDeleted = 0 AND SLD.ID IS NOT NULL  

[Screenshot: execution plan without a parallel insert]

However, as you can see in the screenshot above, this query does not perform a parallel insert, and I would like to know why:

INSERT INTO  #SAmt WITH (TABLOCK) (ProjectID,StateId,K1SummaryID,StateLineDescriptionId)  
SELECT @ProjectID AS 'ProjectID',  
     S.StateId,  
     SLD.StateLineDescriptionId AS 'K1SummaryID',  
     SLD.ID AS 'StateLineDescriptionId'  
FROM Meta.States S(NOLOCK)  
    LEFT JOIN Meta.StateAllocationLineDescriptions SLD(NOLOCK) ON S.StateId = SLD.StateId  
WHERE --SLD.isK1Summary <> 0 and  
      SLD.isDeleted = 0  
     AND SLD.ID IS NOT NULL  
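
For reference, SQL Server only considers a parallel INSERT...SELECT when the target is a heap (or columnstore), TABLOCK is specified, and the database compatibility level is 130 or higher. One hedged experiment is to nudge the optimizer with a documented hint and compare the plans; whether it actually produces a parallel insert for this query is an assumption, not a given:

-- Hedged experiment: ask the optimizer to prefer a parallel plan.
-- ENABLE_PARALLEL_PLAN_PREFERENCE is a documented USE HINT name;
-- it does not guarantee a parallel insert.
INSERT INTO #SAmt WITH (TABLOCK) (ProjectID, StateId, K1SummaryID, StateLineDescriptionId)
SELECT @ProjectID, S.StateId, SLD.StateLineDescriptionId, SLD.ID
FROM Meta.States S (NOLOCK)
    LEFT JOIN Meta.StateAllocationLineDescriptions SLD (NOLOCK) ON S.StateId = SLD.StateId
WHERE SLD.isDeleted = 0
  AND SLD.ID IS NOT NULL
OPTION (USE HINT ('ENABLE_PARALLEL_PLAN_PREFERENCE'));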
sql-server performance-tuning
  • 1 answer
  • 39 Views
Colin Coghill
Asked: 2022-03-14 13:54:46 +0800 CST

How to move PostgreSQL temp_files for CURSORs to a different tablespace/disk

  • 0

While working on performance improvements for our PostgreSQL database (Ubuntu Focal, PostgreSQL 13.3), I created a "temp" tablespace on a fast local NVMe drive. This works well: temporary tables and the temp_files created by large queries end up there, taking a lot of load off the main data drive.

However, we make heavy use of server-side CURSORs, and the temp_files created by those always seem to end up on the main data volume, causing more I/O there than we would really like.

2022-03-13 00:59:51.692 UTC 1350170 xx@xx LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp1350170.54", size 564228392

2022-03-13 00:59:51.692 UTC 1350170 xx@xx STATEMENT:  FETCH FORWARD 5569 FROM "xx"

I have already tuned work_mem as far as I can, but many of our queries are just large (temp_files frequently exceed 1GB). Our biggest bottleneck is I/O on the data drive, so it would be great if I could put these temp_files on the separate local NVMe drive.

I've seen suggestions in a few places to replace the pgsql_tmp directory at ~/main/base/pgsql_tmp with a symlink to another drive, along with warnings that this is not necessarily safe.

Is that a safe approach, or is there another, better way? Is there a way to tell PostgreSQL to put the temp_files generated by CURSORs on the temp drive?
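
For reference, the documented setting that controls where temporary files are written is temp_tablespaces; a minimal sketch, assuming the NVMe drive is already mounted (the tablespace name and path below are hypothetical):

-- Create a tablespace on the NVMe mount (hypothetical path/name):
CREATE TABLESPACE fasttemp LOCATION '/mnt/nvme/pgtemp';
-- Send temporary files (sorts, hashes, temp tables) there for all sessions:
ALTER SYSTEM SET temp_tablespaces = 'fasttemp';
SELECT pg_reload_conf();
-- Or per session, before opening the cursor:
SET temp_tablespaces = 'fasttemp';

Whether the tuplestore spill files behind a server-side cursor honor temp_tablespaces on 13.3 is worth verifying against log lines like those above; if they do, the symlink workaround becomes unnecessary.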

Thanks,

postgresql performance-tuning
  • 1 answer
  • 95 Views
Skary
Asked: 2022-01-11 05:40:11 +0800 CST

SQL Server performance tuning for a table query with paging: understanding the current solution

  • 6

As stated in the title, I've started performance-tuning a paged table query generated by a legacy program that uses LINQ to SQL as its ORM.

I found a resource that strongly recommends sorting the table before paging: https://rimdev.io/optimizing-linq-sql-skip-take/

So I followed the advice provided and saw a huge difference. I gather this is somehow related to how ROW_NUMBER is computed, but it's not clear to me exactly what happens and why there is such a large difference between the two queries.

The original slow query (dataset of ~7K elements, takes ~3 seconds):

SELECT [t7].[ID], [t7].[ID_BRAND], [t7].[CODE], [t7].[CODFOR], [t7].[COD_ALT01], [t7].[COD_ALT02], [t7].[COD_ALT03], [t7].[ID_UOM], [t7].[IS_ACTIVE], [t7].[_ATTRIBUTES] AS [_ATTRIBUTES], [t7].[_DOCUMENTS] AS [_DOCUMENTS], [t7].[_SEO] AS [_SEO], [t7].[_TRANSLATIONS] AS [_TRANSLATIONS], [t7].[_TAGS] AS [_TAGS], [t7].[_NOTES] AS [_NOTES], [t7].[_METADATA] AS [_METADATA], [t7].[IS_B2B], [t7].[IS_B2C], [t7].[IS_PROMO], [t7].[IS_NEWS], [t7].[CAN_BE_RETURNED], [t7].[IS_SHIPPABLE], [t7].[HAS_SHIPPING_COSTS], [t7].[IS_PURCHEASABLE], [t7].[test], [t7].[ID2], [t7].[CODE2], [t7].[BUSINESS_NAME], [t7].[NAME], [t7].[PHONE_01], [t7].[PHONE_02], [t7].[PHONE_03], [t7].[FAX_01], [t7].[FAX_02], [t7].[COUNTRY_01], [t7].[CITY_01], [t7].[ADDRESS_01], [t7].[COUNTRY_02], [t7].[CITY_02], [t7].[ADDRESS_02], [t7].[EMAIL_01], [t7].[EMAIL_02], [t7].[PEC], [t7].[SITE_01], [t7].[SITE_02], [t7].[SITE_03], [t7].[SITE_04], [t7].[VAT_NUMBER], [t7].[SORT], [t7].[GROUPID_01], [t7].[IS_GROUPLEADER_01], [t7].[GROUPID_02], [t7].[IS_GROUPLEADER_02],[t7].[IS_ACTIVE2], [t7].[[_DOCUMENTS]]2] AS [_DOCUMENTS2], [t7].[[_SEO]]2] AS [_SEO2], [t7].[[_METADATA]]2] AS [_METADATA2], [t7].[test2], [t7].[ID3], [t7].[CODE3], [t7].[[_TRANSLATIONS]]2] AS [_TRANSLATIONS2], [t7].[[_METADATA]]3] AS [_METADATA3], [t7].[test3], [t7].[ID4], [t7].[ID_LINE], [t7].[ID_GROUP], [t7].[ID_CLASS], [t7].[ID_FAM], [t7].[ID_ARTICLE]
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY [t0].[ID], [t0].[ID_BRAND], [t0].[CODE], [t0].[CODFOR], [t0].[COD_ALT01], [t0].[COD_ALT02], [t0].[COD_ALT03], [t0].[ID_UOM], [t0].[IS_ACTIVE], [t0].[_ATTRIBUTES], [t0].[_DOCUMENTS], [t0].[_SEO], [t0].[_TRANSLATIONS], [t0].[_TAGS], [t0].[_NOTES], [t0].[_METADATA], [t0].[IS_B2B], [t0].[IS_B2C], [t0].[IS_PROMO], [t0].[IS_NEWS], [t0].[CAN_BE_RETURNED], [t0].[IS_SHIPPABLE], [t0].[HAS_SHIPPING_COSTS], [t0].[IS_PURCHEASABLE], [t2].[test], [t2].[ID], [t2].[CODE], [t2].[BUSINESS_NAME], [t2].[NAME], [t2].[PHONE_01], [t2].[PHONE_02], [t2].[PHONE_03], [t2].[FAX_01], [t2].[FAX_02], [t2].[COUNTRY_01], [t2].[CITY_01], [t2].[ADDRESS_01], [t2].[COUNTRY_02], [t2].[CITY_02], [t2].[ADDRESS_02], [t2].[EMAIL_01], [t2].[EMAIL_02], [t2].[PEC], [t2].[SITE_01], [t2].[SITE_02], [t2].[SITE_03], [t2].[SITE_04], [t2].[VAT_NUMBER], [t2].[SORT], [t2].[GROUPID_01], [t2].[IS_GROUPLEADER_01], [t2].[GROUPID_02], [t2].[IS_GROUPLEADER_02], [t2].[IS_ACTIVE], [t2].[_DOCUMENTS], [t2].[_SEO], [t2].[_METADATA], [t4].[test], [t4].[ID], [t4].[CODE], [t4].[_TRANSLATIONS], [t4].[_METADATA], [t6].[test], [t6].[ID], [t6].[ID_LINE], [t6].[ID_GROUP], [t6].[ID_CLASS], [t6].[ID_FAM], [t6].[ID_ARTICLE]) AS [ROW_NUMBER], [t0].[ID], [t0].[ID_BRAND], [t0].[CODE], [t0].[CODFOR], [t0].[COD_ALT01], [t0].[COD_ALT02], [t0].[COD_ALT03], [t0].[ID_UOM], [t0].[IS_ACTIVE], [t0].[_ATTRIBUTES], [t0].[_DOCUMENTS], [t0].[_SEO], [t0].[_TRANSLATIONS], [t0].[_TAGS], [t0].[_NOTES], [t0].[_METADATA], [t0].[IS_B2B], [t0].[IS_B2C], [t0].[IS_PROMO], [t0].[IS_NEWS], [t0].[CAN_BE_RETURNED], [t0].[IS_SHIPPABLE], [t0].[HAS_SHIPPING_COSTS], [t0].[IS_PURCHEASABLE], [t2].[test], [t2].[ID] AS [ID2], [t2].[CODE] AS [CODE2], [t2].[BUSINESS_NAME], [t2].[NAME], [t2].[PHONE_01], [t2].[PHONE_02], [t2].[PHONE_03], [t2].[FAX_01], [t2].[FAX_02], [t2].[COUNTRY_01], [t2].[CITY_01], [t2].[ADDRESS_01], [t2].[COUNTRY_02], [t2].[CITY_02], [t2].[ADDRESS_02], [t2].[EMAIL_01], [t2].[EMAIL_02], [t2].[PEC], [t2].[SITE_01], [t2].[SITE_02], [t2].[SITE_03], [t2].[SITE_04], [t2].[VAT_NUMBER], [t2].[SORT], [t2].[GROUPID_01], [t2].[IS_GROUPLEADER_01], [t2].[GROUPID_02], [t2].[IS_GROUPLEADER_02], [t2].[IS_ACTIVE] AS [IS_ACTIVE2], [t2].[_DOCUMENTS] AS [[_DOCUMENTS]]2], [t2].[_SEO] AS [[_SEO]]2], [t2].[_METADATA] AS [[_METADATA]]2], [t4].[test] AS [test2], [t4].[ID] AS [ID3], [t4].[CODE] AS [CODE3], [t4].[_TRANSLATIONS] AS [[_TRANSLATIONS]]2], [t4].[_METADATA] AS [[_METADATA]]3], [t6].[test] AS [test3], [t6].[ID] AS [ID4], [t6].[ID_LINE], [t6].[ID_GROUP], [t6].[ID_CLASS], [t6].[ID_FAM], [t6].[ID_ARTICLE]
    FROM [dbo].[tbl_ana_Articles] AS [t0]
    LEFT OUTER JOIN (
        SELECT 1 AS [test], [t1].[ID], [t1].[CODE], [t1].[BUSINESS_NAME], [t1].[NAME], [t1].[PHONE_01], [t1].[PHONE_02], [t1].[PHONE_03], [t1].[FAX_01], [t1].[FAX_02], [t1].[COUNTRY_01], [t1].[CITY_01], [t1].[ADDRESS_01], [t1].[COUNTRY_02], [t1].[CITY_02], [t1].[ADDRESS_02], [t1].[EMAIL_01], [t1].[EMAIL_02], [t1].[PEC], [t1].[SITE_01], [t1].[SITE_02], [t1].[SITE_03], [t1].[SITE_04], [t1].[VAT_NUMBER], [t1].[SORT], [t1].[GROUPID_01], [t1].[IS_GROUPLEADER_01], [t1].[GROUPID_02], [t1].[IS_GROUPLEADER_02], [t1].[IS_ACTIVE], [t1].[_DOCUMENTS], [t1].[_SEO], [t1].[_METADATA]
        FROM [dbo].[tbl_ana_Brands] AS [t1]
        ) AS [t2] ON [t2].[ID] = [t0].[ID_BRAND]
    LEFT OUTER JOIN (
        SELECT 1 AS [test], [t3].[ID], [t3].[CODE], [t3].[_TRANSLATIONS], [t3].[_METADATA]
        FROM [dbo].[tbl_ana_UoMs] AS [t3]
        ) AS [t4] ON [t4].[ID] = [t0].[ID_UOM]
    LEFT OUTER JOIN (
        SELECT 1 AS [test], [t5].[ID], [t5].[ID_LINE], [t5].[ID_GROUP], [t5].[ID_CLASS], [t5].[ID_FAM], [t5].[ID_ARTICLE]
        FROM [dbo].[tbl_src_ArticlesCategories] AS [t5]
        ) AS [t6] ON [t6].[ID_ARTICLE] = [t0].[ID]
    WHERE (
        (CASE 
            WHEN 1 = 1 THEN CONVERT(Int,[t0].[IS_ACTIVE])
            ELSE 0
         END)) = 1
    ) AS [t7]
WHERE [t7].[ROW_NUMBER]  BETWEEN 7272 + 1 AND 7284
ORDER BY [t7].[ROW_NUMBER]

The slow query's execution plan: https://www.brentozar.com/pastetheplan/?id=Sk-rLnY3F

The revised fast query (dataset of ~7K elements, takes ~0 seconds):

SELECT [t7].[ID], [t7].[ID_BRAND], [t7].[CODE], [t7].[CODFOR], [t7].[COD_ALT01], [t7].[COD_ALT02], [t7].[COD_ALT03], [t7].[ID_UOM], [t7].[IS_ACTIVE], [t7].[_ATTRIBUTES] AS [_ATTRIBUTES], [t7].[_DOCUMENTS] AS [_DOCUMENTS], [t7].[_SEO] AS [_SEO], [t7].[_TRANSLATIONS] AS [_TRANSLATIONS], [t7].[_TAGS] AS [_TAGS], [t7].[_NOTES] AS [_NOTES], [t7].[_METADATA] AS [_METADATA], [t7].[IS_B2B], [t7].[IS_B2C], [t7].[IS_PROMO], [t7].[IS_NEWS], [t7].[CAN_BE_RETURNED], [t7].[IS_SHIPPABLE], [t7].[HAS_SHIPPING_COSTS], [t7].[IS_PURCHEASABLE], [t7].[test], [t7].[ID2], [t7].[CODE2], [t7].[BUSINESS_NAME], [t7].[NAME], [t7].[PHONE_01], [t7].[PHONE_02], [t7].[PHONE_03], [t7].[FAX_01], [t7].[FAX_02], [t7].[COUNTRY_01], [t7].[CITY_01], [t7].[ADDRESS_01], [t7].[COUNTRY_02], [t7].[CITY_02], [t7].[ADDRESS_02], [t7].[EMAIL_01], [t7].[EMAIL_02], [t7].[PEC], [t7].[SITE_01], [t7].[SITE_02], [t7].[SITE_03], [t7].[SITE_04], [t7].[VAT_NUMBER], [t7].[SORT], [t7].[GROUPID_01], [t7].[IS_GROUPLEADER_01], [t7].[GROUPID_02], [t7].[IS_GROUPLEADER_02],[t7].[IS_ACTIVE2], [t7].[[_DOCUMENTS]]2] AS [_DOCUMENTS2], [t7].[[_SEO]]2] AS [_SEO2], [t7].[[_METADATA]]2] AS [_METADATA2], [t7].[test2], [t7].[ID3], [t7].[CODE3], [t7].[[_TRANSLATIONS]]2] AS [_TRANSLATIONS2], [t7].[[_METADATA]]3] AS [_METADATA3], [t7].[test3], [t7].[ID4], [t7].[ID_LINE], [t7].[ID_GROUP], [t7].[ID_CLASS], [t7].[ID_FAM], [t7].[ID_ARTICLE]
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY [t0].[ID]) AS [ROW_NUMBER], [t0].[ID], [t0].[ID_BRAND], [t0].[CODE], [t0].[CODFOR], [t0].[COD_ALT01], [t0].[COD_ALT02], [t0].[COD_ALT03], [t0].[ID_UOM], [t0].[IS_ACTIVE], [t0].[_ATTRIBUTES], [t0].[_DOCUMENTS], [t0].[_SEO], [t0].[_TRANSLATIONS], [t0].[_TAGS], [t0].[_NOTES], [t0].[_METADATA], [t0].[IS_B2B], [t0].[IS_B2C], [t0].[IS_PROMO], [t0].[IS_NEWS], [t0].[CAN_BE_RETURNED], [t0].[IS_SHIPPABLE], [t0].[HAS_SHIPPING_COSTS], [t0].[IS_PURCHEASABLE], [t2].[test], [t2].[ID] AS [ID2], [t2].[CODE] AS [CODE2], [t2].[BUSINESS_NAME], [t2].[NAME], [t2].[PHONE_01], [t2].[PHONE_02], [t2].[PHONE_03], [t2].[FAX_01], [t2].[FAX_02], [t2].[COUNTRY_01], [t2].[CITY_01], [t2].[ADDRESS_01], [t2].[COUNTRY_02], [t2].[CITY_02], [t2].[ADDRESS_02], [t2].[EMAIL_01], [t2].[EMAIL_02], [t2].[PEC], [t2].[SITE_01], [t2].[SITE_02], [t2].[SITE_03], [t2].[SITE_04], [t2].[VAT_NUMBER], [t2].[SORT], [t2].[GROUPID_01], [t2].[IS_GROUPLEADER_01], [t2].[GROUPID_02], [t2].[IS_GROUPLEADER_02], [t2].[IS_ACTIVE] AS [IS_ACTIVE2], [t2].[_DOCUMENTS] AS [[_DOCUMENTS]]2], [t2].[_SEO] AS [[_SEO]]2], [t2].[_METADATA] AS [[_METADATA]]2], [t4].[test] AS [test2], [t4].[ID] AS [ID3], [t4].[CODE] AS [CODE3], [t4].[_TRANSLATIONS] AS [[_TRANSLATIONS]]2], [t4].[_METADATA] AS [[_METADATA]]3], [t6].[test] AS [test3], [t6].[ID] AS [ID4], [t6].[ID_LINE], [t6].[ID_GROUP], [t6].[ID_CLASS], [t6].[ID_FAM], [t6].[ID_ARTICLE]
    FROM [dbo].[tbl_ana_Articles] AS [t0]
    LEFT OUTER JOIN (
        SELECT 1 AS [test], [t1].[ID], [t1].[CODE], [t1].[BUSINESS_NAME], [t1].[NAME], [t1].[PHONE_01], [t1].[PHONE_02], [t1].[PHONE_03], [t1].[FAX_01], [t1].[FAX_02], [t1].[COUNTRY_01], [t1].[CITY_01], [t1].[ADDRESS_01], [t1].[COUNTRY_02], [t1].[CITY_02], [t1].[ADDRESS_02], [t1].[EMAIL_01], [t1].[EMAIL_02], [t1].[PEC], [t1].[SITE_01], [t1].[SITE_02], [t1].[SITE_03], [t1].[SITE_04], [t1].[VAT_NUMBER], [t1].[SORT], [t1].[GROUPID_01], [t1].[IS_GROUPLEADER_01], [t1].[GROUPID_02], [t1].[IS_GROUPLEADER_02], [t1].[IS_ACTIVE], [t1].[_DOCUMENTS], [t1].[_SEO], [t1].[_METADATA]
        FROM [dbo].[tbl_ana_Brands] AS [t1]
        ) AS [t2] ON [t2].[ID] = [t0].[ID_BRAND]
    LEFT OUTER JOIN (
        SELECT 1 AS [test], [t3].[ID], [t3].[CODE], [t3].[_TRANSLATIONS], [t3].[_METADATA]
        FROM [dbo].[tbl_ana_UoMs] AS [t3]
        ) AS [t4] ON [t4].[ID] = [t0].[ID_UOM]
    LEFT OUTER JOIN (
        SELECT 1 AS [test], [t5].[ID], [t5].[ID_LINE], [t5].[ID_GROUP], [t5].[ID_CLASS], [t5].[ID_FAM], [t5].[ID_ARTICLE]
        FROM [dbo].[tbl_src_ArticlesCategories] AS [t5]
        ) AS [t6] ON [t6].[ID_ARTICLE] = [t0].[ID]
    WHERE (
        (CASE 
            WHEN 1 = 1 THEN CONVERT(Int,[t0].[IS_ACTIVE])
            ELSE 0
         END)) = 1
    ) AS [t7]
WHERE [t7].[ROW_NUMBER] BETWEEN 7272 + 1 AND 7284
ORDER BY [t7].[ROW_NUMBER]

The fast query's execution plan: https://www.brentozar.com/pastetheplan/?id=B10l82K2Y

Note: all of the query code is auto-generated by the ORM.

The two queries look very similar to me, and it's not clear what improves performance so dramatically. I'd really appreciate a hint about what helps SQL Server so much here, so I can better understand how to tune the ORM in the future.
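
Presumably the decisive difference is the ORDER BY list inside ROW_NUMBER(): the slow query numbers rows over all sixty-odd projected columns, forcing a sort of the full wide rows before the page filter, while the fast query numbers over [t0].[ID] alone, which can come back in index order with no sort at all. A minimal sketch of the fast pattern (assuming, hypothetically, that [ID] is the clustered primary key of tbl_ana_Articles; this is my reading, not a verified diagnosis):

-- Paging keyed on the clustered index order: no sort, just a seek + top.
SELECT t.*
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY a.[ID]) AS rn, a.*
    FROM [dbo].[tbl_ana_Articles] AS a
    WHERE a.[IS_ACTIVE] = 1
) AS t
WHERE t.rn BETWEEN 7272 + 1 AND 7284
ORDER BY t.rn;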

sql-server performance-tuning
  • 1 answer
  • 932 Views
Kekar
Asked: 2021-11-12 02:13:08 +0800 CST

Query performance improvement in Cassandra

  • 1

I have a table in my Cassandra database:

CREATE TABLE table (
    pk uuid,
    status int,
    location text,
    type text,
    id text,
    updatedtimestamp timestamp,
    PRIMARY KEY (pk)
);

CREATE INDEX  tablelocation ON table (location);
CREATE INDEX  tabletype ON table (type);
CREATE INDEX  tableid ON table (id);
CREATE INDEX  tableupdatedtimestamp ON table (updatedtimestamp);

The query I run is:

Select * from table 
where location='A1' 
and type='T1' 
and status=001 
and id='NA' 
allow filtering;

Cassandra takes more than 5 seconds to return 4,000 records for this query. I already have secondary indexes on all of these columns. According to our DBA, the problem is the id='NA' condition: too many rows satisfy it.

However, that condition exists because of a business use case and cannot be removed; there is no other mechanism for filtering on that value.

I'm considering creating a new index covering all four columns, but I worry it would hurt write performance: the status column is updated very frequently.

Is there anything we can do to tune this query's performance?
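
A common Cassandra alternative to stacking secondary indexes is a denormalized query table whose partition key is exactly the filter columns, written alongside the base table. A hedged sketch (table name and layout are hypothetical; see the caveat below):

-- Query table keyed by the filter columns:
CREATE TABLE table_by_filter (
    location text,
    type text,
    status int,
    id text,
    pk uuid,
    updatedtimestamp timestamp,
    PRIMARY KEY ((location, type, status, id), pk)
);

-- The original query then becomes a single-partition read, no ALLOW FILTERING:
SELECT * FROM table_by_filter
WHERE location = 'A1' AND type = 'T1' AND status = 1 AND id = 'NA';

The caveat: with status in the partition key, every status change becomes a delete plus an insert in this table, which may be unacceptable given how frequently status is updated.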

query-performance performance-tuning
  • 2 answers
  • 556 Views
AAA
Asked: 2021-11-09 07:05:59 +0800 CST

SQL Server disk I/O throughput in Performance Monitor

  • 0

I use SQL Server 2019 and have enabled trace flag 1117 (grow all files in a filegroup equally) for my database. I need to work out the right number of data files for the primary filegroup given my system resources, so I want to observe this behavior with performance-monitoring software, but I don't know which counters I should use (e.g., Disk Writes/sec).

First test:

CREATE DATABASE TestIO
ON PRIMARY 
    ( NAME = N'PRIMARY1',            FILENAME = N'D:\DB\Temp\TestIO_PRIMARY1.mdf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB),
 FILEGROUP FG2 
    ( NAME = N'secondary',          FILENAME = N'D:\DB\Temp\TestIO_secondary.ndf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB)
 LOG ON 
    ( NAME = N'TestIO_log',     FILENAME = N'D:\DB\Temp\TEST_log.ldf' ,FILEGROWTH=2GB,MAXSIZE=2TB,SIZE=2GB)
GO

Second test:

USE master
GO
DROP DATABASE IF EXISTS TestIO
CREATE DATABASE TestIO
ON PRIMARY 
    ( NAME = N'PRIMARY1',            FILENAME = N'D:\DB\Temp\TestIO_PRIMARY1.mdf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB),
    ( NAME = N'PRIMARY2',            FILENAME = N'D:\DB\Temp\TestIO_PRIMARY2.mdf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB), 
    ( NAME = N'PRIMARY3',            FILENAME = N'D:\DB\Temp\TestIO_PRIMARY3.mdf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB), 
    ( NAME = N'PRIMARY4',            FILENAME = N'D:\DB\Temp\TestIO_PRIMARY4.mdf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB), 
 FILEGROUP FG2 
    ( NAME = N'secondary',          FILENAME = N'D:\DB\Temp\TestIO_secondary.ndf',FILEGROWTH=512GB,MAXSIZE=UNLIMITED,SIZE=2GB)
 LOG ON 
    ( NAME = N'TestIO_log',     FILENAME = N'D:\DB\Temp\TEST_log.ldf' ,FILEGROWTH=2GB,MAXSIZE=2TB,SIZE=2GB)
GO
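
PerfMon counters such as Disk Writes/sec, Disk Write Bytes/sec, and Avg. Disk sec/Write on the PhysicalDisk object for the D: volume are the usual starting points. In addition, SQL Server's own per-file statistics can show whether writes actually spread evenly across the primary files between the two tests; a hedged sketch using the documented DMV:

-- Per-file write I/O for the TestIO database:
SELECT mf.name                  AS file_name,
       vfs.num_of_writes,
       vfs.num_of_bytes_written,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'TestIO'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;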
sql-server-2019 performance-tuning
  • 2 answers
  • 152 Views
amanullah
Asked: 2021-10-06 22:29:24 +0800 CST

DBCC shrink file history

  • 1

I ran a shrink of a data file (.mdf) through the GUI by choosing Tasks -> Shrink -> Files and selecting the second option, i.e., reorganize pages before releasing unused space. Now I'd like the history of the shrink file commands I ran against 50 databases, for example:

  1. How much space it reclaimed
  2. The .mdf file's size before and after the shrink file command ran.
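
As far as I know there is no durable built-in history of manual shrinks (the default trace records DBCC events, but it rolls over quickly), so before/after numbers have to be captured manually around each run. A hedged sketch of the per-file snapshot query, run in each database:

-- Snapshot per-file size and free space before and after a shrink:
SELECT name AS logical_name,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;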

Thanks

sql-server performance-tuning
  • 2 answers
  • 71 Views
jericzech
Asked: 2021-09-10 05:08:15 +0800 CST

sp_HumanEvents @event_type = N'blocking': need help with a logging job

  • 3

Those of you who use the excellent sp_HumanEvents, perhaps even the author himself: please help me understand what (through my own stupidity) I'm missing.

#1

When monitoring blocking, the blocked process threshold (in seconds) must be set, otherwise blocked-process events are never fired. How does this relate to the @blocking_duration_ms parameter?

Example: the blocked process threshold is set to 10 seconds, and @blocking_duration_ms keeps its default of 500 ms.
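
For reference, the threshold is a server-level sp_configure setting; as I read it, it controls when the blocked-process event fires at all, while @blocking_duration_ms only filters which reports sp_HumanEvents keeps (that reading is mine, not the author's documentation). Setting the threshold:

-- Set the server-wide blocked process threshold to 10 seconds:
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'blocked process threshold (s)', 10;
RECONFIGURE;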

#2

For logging the results to a table continuously, independent of server restarts, I was advised to use an Agent job, with an example that creates a job named sp_HumanEvents: 10 second Check In but gives it a schedule that recurs at midnight on Sundays. Wouldn't it be more appropriate to start it automatically whenever SQL Server Agent starts?

sql-server performance-tuning
  • 1 answer
  • 127 Views
D-K
Asked: 2021-05-29 09:51:55 +0800 CST

Should the T and L stages of a well-tuned ETL reporting process take the same time regardless of time-bucket width and data size?

  • 1

For an ETL reporting system, is it normal for the total execution time of a 15-minute pull with no data to be similar to that of a 24-hour pull with data?


I had expected the total ETL time to be shorter when there is no data, but between the 15-minute and the 24-hour pulls that is not the case. I must admit, though, that I know nothing about the internals of the T and L stages in the report server.

Can someone shed light on whether the durations of the T and L stages are typically fixed (up to a point)?

sql-server performance-tuning
  • 3 answers
  • 152 Views
peppy
Asked: 2021-05-18 13:24:00 +0800 CST

MySQL 8.0 tuning advice needed: slowdowns, lockups, and assorted problems

  • 3

I recently upgraded my database to MySQL 8.0, and I'm using phpMyAdmin to get status information. I also upgraded my virtual server to 4GB RAM and 2 vCPUs, intended as a dedicated MySQL server for my website. MySQL runs on that server by itself; PHP and everything else lives on a separate server.

The problem: memory usage on my server seems to creep up over time. It usually runs fine, but after a few days it crashes thanks to the OOM killer. These crashes can be ugly: sometimes MySQL won't restart for hours (lockups/freezes/etc.), even though my cron job checks every 5 minutes whether MySQL is running and restarts it if it isn't. My site is sometimes down all night/morning until I wake up, and I'm forced to reboot the OS a few times before things start working again.

There are also slowdowns on the website that seem to strike without warning and without any obvious cause: nothing in the slow query log, site traffic light, free memory plentiful. They last about an hour and then the problem clears up on its own. While it's happening, loading a web page can take 20-30 seconds because of the MySQL problems.

I looked into the slow query log and the queries running without indexes. On investigation, it turned out many of them involve a small 200-row table of countries/states, where we select most of the table and display it on the site as designed (which is why so many of them show up in the "queries not using indexes" list). Other than those bulk selects from a small table, there was nothing else in that part of the log.

Here is some data from phpMyAdmin (updated May 21):

Network traffic since startup: 165.1 GiB
This MySQL server has been running for 3 days, 15 hours, 58 minutes and 2 seconds. It started up on May 18, 2021 at 05:43 AM.

Traffic     #   ø per hour
Received    5.3 GiB 61.4 MiB
Sent    159.8 GiB   1.8 GiB
Total   165.1 GiB   1.9 GiB
Connections #   ø per hour  %
Max. concurrent connections 32  --- ---
Failed attempts 25  0.28    <0.01%
Aborted 0   0   0%
Total   2,494 k 28.35 k 100.00%

Alerted status variables (flagged red by phpMyAdmin), updated May 21:

Aborted connects    25  The number of failed attempts to connect to the MySQL server.
Binlog cache disk use   19.5 k  The number of transactions that used the temporary binary log cache but that exceeded the value of binlog_cache_size and used a temporary file to store statements from the transaction.
Handler read rnd    70.2 M  The number of requests to read a row based on a fixed position. This is high if you are doing a lot of queries that require sorting of the result. You probably have a lot of queries that require MySQL to scan whole tables or you have joins that don't use keys properly.
Handler read rnd next   5.9 G   The number of requests to read the next row in the data file. This is high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have.
Innodb buffer pool pages dirty  20  The number of pages currently dirty.
Innodb buffer pool reads    6.8 M   The number of logical reads that InnoDB could not satisfy from the buffer pool and had to do a single-page read.
Innodb buffer pool wait free    3   Normally, writes to the InnoDB buffer pool happen in the background. However, if it's necessary to read or create a page and no clean pages are available, it's necessary to wait for pages to be flushed first. This counter counts instances of these waits. If the buffer pool size was set properly, this value should be small.
Innodb row lock time avg    911 The average time to acquire a row lock, in milliseconds.
Innodb row lock time max    31.9 k  The maximum time to acquire a row lock, in milliseconds.
Innodb row lock waits   228 The number of times a row lock had to be waited for.
Opened tables   7.9 k   The number of tables that have been opened. If opened tables is big, your table cache value is probably too small.
Select full join    203 k   The number of joins that do not use indexes. If this value is not 0, you should carefully check the indexes of your tables.
Slow queries    43  The number of queries that have taken more than long_query_time seconds.
Sort merge passes   4.3 k   The number of merge passes the sort algorithm has had to do. If this value is large, you should consider increasing the value of the sort_buffer_size system variable.
Table locks waited  1.5 k   The number of times that a table lock could not be acquired immediately and a wait was needed. If this is high, and you have performance problems, you should first optimize your queries, and then either split your table or tables or use replication.

Query statistics (updated May 21):

Questions since startup: 35,646,301
ø per hour: 405,017
ø per minute: 6,750
ø per second: 113
Statements  #   ø per hour  %
select  33,914 k    385.3 k 95.14
update  568 k   6,448.2 1.59
insert  349 k   3,968.7 0.98
change db   337 k   3,826.6 0.94
set option  303 k   3,447.3 0.85
replace 136 k   1,545.2 0.38
delete  14,064  159.8   0.04
update multi    4,827   54.8    0.01
show fields 2,940   33.4    0.01
truncate    2,163   24.6    0.01
show status 2,092   23.8    0.01
show replica status 2,092   23.8    0.01
show slave status   2,092   23.8    0.01
show master status  2,091   23.8    0.01
show processlist    2,047   23.3    0.01
show create table   1,059   12  <0.01
show table status   979 11.1    <0.01
rollback to savepoint   957 10.9    <0.01
show triggers   957 10.9    <0.01
show keys   335 3.8 <0.01
show variables  272 3.1 <0.01
show tables 119 1.4 <0.01
create table    64  0.7 <0.01
show warnings   61  0.7 <0.01
insert select   37  0.4 <0.01
drop table  30  0.3 <0.01
delete multi    26  0.3 <0.01
unlock tables   15  0.2 <0.01
begin   15  0.2 <0.01
show create db  15  0.2 <0.01
savepoint   15  0.2 <0.01
show create trigger 12  0.1 <0.01
release savepoint   12  0.1 <0.01
show grants 8   0.1 <0.01
show binlogs    8   0.1 <0.01
show databases  5   0.1 <0.01
kill    4   <0.1    <0.01
show storage engines    2   <0.1    <0.01
show slave hosts    1   <0.1    <0.01
show replicas   1   <0.1    <0.01
flush   1   <0.1    <0.01
create db   1   <0.1    <0.01

My MySQL config file, my.cnf: a few years ago a professional DBA told me to set these variables to tune MySQL on a 1GB server and deal with the Out Of Memory crashes. The only variable I changed recently is innodb_buffer_pool_size, from 512MB to 2G (updated May 21: added "skip-name-resolve" to fix an issue I found with mysqltuner):

[mysqld]
skip-name-resolve
default_authentication_plugin = mysql_native_password
character_set_server=latin1
collation_server=latin1_swedish_ci
port = 3306
sql_mode = "NO_ENGINE_SUBSTITUTION"
innodb_buffer_pool_size = 2000M
innodb_strict_mode = OFF
join_buffer_size = 1M
key_buffer_size = 64M
max_connect_errors = 10000
myisam_recover_options = "BACKUP,FORCE"
performance_schema = 0
read_buffer_size = 1M
slow_query_log = ON
sort_buffer_size = 1M
sync_binlog = 0
thread_stack = 262144
wait_timeout = 14400
table_open_cache = 10000
table_definition_cache = 2500
open_files_limit = 30000
max_connections = 100
read_rnd_buffer_size = 128K
innodb_change_buffer_max_size = 15
innodb_log_buffer_size = 12M
innodb_log_file_size = 120M
innodb_buffer_pool_instances = 8
innodb_lru_scan_depth = 128
innodb_page_cleaners = 64
thread_cache_size = 50
max_heap_table_size=24M
tmp_table_size=24M
thread_cache_size=100
innodb_io_capacity=800
read_buffer_size=128K
read_rnd_buffer_size=64K
eq_range_index_dive_limit=32
symbolic-links=0
key_cache_age_threshold=64800
key_cache_division_limit=50
key_cache_block_size=32K
innodb_buffer_pool_dump_pct=90
innodb_print_all_deadlocks=ON
innodb_read_ahead_threshold=8
innodb_read_io_threads=64
innodb_write_io_threads=64
max_allowed_packet=32M
max_seeks_for_key=32
max_write_lock_count=16
myisam_repair_threads=4
open_files_limit=30000
query_alloc_block_size=32K
query_prealloc_size=32K
sort_buffer_size=2M
updatable_views_with_limit=NO
general_log_file=/var/log/mysql/general.log
slow_query_log_file=/var/log/mysql/slow-query.log
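
Before diving into individual variables, it may help to sanity-check the worst-case memory footprint this file allows on a 4GB host. A hedged back-of-envelope, reading the effective values straight from the server (per-thread buffers can be allocated more than once per query, so this is a floor rather than a ceiling):

-- Effective values as the server sees them (later duplicates in my.cnf win):
SELECT @@innodb_buffer_pool_size DIV (1024 * 1024)   AS buffer_pool_mb,
       @@max_connections                             AS max_connections,
       (@@join_buffer_size + @@sort_buffer_size +
        @@read_buffer_size + @@read_rnd_buffer_size +
        @@thread_stack) DIV 1024                     AS per_thread_kb;
-- Here: ~2000M of buffer pool + 100 connections x ~3.4M of per-thread
-- buffers is roughly 2.3G+ before temp tables (up to 24M each), InnoDB
-- overhead, and the OS -- tight on 4GB, and consistent with gradual OOM.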

Update

I decided to phase out all the MyISAM tables in all my databases in favor of InnoDB. There are no MyISAM tables anymore. Hopefully this simplifies the tuning effort.

Update 2: Pastebins

Three days ago I phased out all the MyISAM tables and converted them to InnoDB, added "skip-name-resolve" to my.cnf, and restarted the server. I've updated the rest of the information above from phpMyAdmin as of May 21, and today added these pastebins with new data from after the server had been running for 2-3 days.

SHOW GLOBAL STATUS: https://pastebin.com/r3t84pvZ

SHOW GLOBAL VARIABLES: https://pastebin.com/kQpevtdx

SHOW FULL PROCESSLIST: https://pastebin.com/fR6b7Tdg

STATUS: https://pastebin.com/vyWyhZSf

MySQLTuner: https://pastebin.com/ETLCa48V

top: https://pastebin.com/cU8RvgpT

ulimit -a: https://pastebin.com/BhNVgEXH

iostat -xm 5 3: https://pastebin.com/MxymEXyq

/proc/meminfo: https://pastebin.com/PKKeumyt

mysql performance-tuning
  • 2 answers
  • 1907 Views
std_unordered_map
Asked: 2020-12-10 02:40:46 +0800 CST

SQLite: find the next or previous element in a table of integer tuples

  • 8

I have an SQLite table named tuples, defined as follows:

create table tuples
(
    a INTEGER not null,
    b INTEGER not null,
    c INTEGER not null,
    d INTEGER not null,
    primary key (a, b, c, d)
) without rowid;

It is filled with millions of unique tuples (> 1TB). New tuples are inserted frequently, with "random" values. Rows are deleted only in rare cases.

For an external process accessing the database, I need to find the "next" or "previous" existing 4-tuple in the table.

For example: given the tuples (1-1-1-1), (1-1-1-4) and (1-2-3-4), for the tuple (1-1-1-3) (which does not need to exist in the table) the "next" element is (1-1-1-4) and the previous one is (1-1-1-1) (both of these do need to exist). For (1-1-1-4), the "next" element is (1-2-3-4). Edge case: if there is no "next" or "previous" element at all, an empty result is acceptable; (1-2-3-4) has no "next" element.

Currently, to find the next tuple (the "center" being (1-1-1-3)), I use:

select a,b,c,d from tuples
where (a == 1 AND b == 1 AND c == 1 AND d > 3) OR
      (a == 1 AND b == 1 AND c > 1) OR
      (a == 1 AND b > 1) OR
      (a > 1)
order by a, b, c, d
limit 1;

This is really slow.

The short question here is: is there a way to speed this up? Ideally, a response should take only a few milliseconds, like searching for the exact value of a tuple (which is essentially instant). Using other/more indexes, multiple and/or different queries, or even changing the database structure are all valid solutions.


Edit: each element of a tuple can span the entire allowed integer range.
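
Since the four OR branches together describe exactly a lexicographic comparison, one candidate is SQLite's row-value syntax (available since 3.15), which the planner can turn into a single seek on an index such as this primary key; a hedged sketch, worth checking with EXPLAIN QUERY PLAN:

-- Next tuple after (1,1,1,3), as a single row-value comparison:
SELECT a, b, c, d FROM tuples
WHERE (a, b, c, d) > (1, 1, 1, 3)
ORDER BY a, b, c, d
LIMIT 1;

-- Previous tuple: flip the comparison and scan the index backwards:
SELECT a, b, c, d FROM tuples
WHERE (a, b, c, d) < (1, 1, 1, 3)
ORDER BY a DESC, b DESC, c DESC, d DESC
LIMIT 1;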

query-performance performance-tuning
  • 2 answers
  • 293 Views
