Read Committed Snapshot Isolation (RCSI) is well known. The main way it can become a performance bottleneck is when version chains grow too long. Is there any wait type that indicates this specific problem, or does it only show up through other wait types (and if so, which ones)?
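For context, a minimal sketch of the diagnostics I would run while investigating this, assuming an on-premises instance on SQL Server 2017 or later (sys.dm_tran_version_store_space_usage is not available earlier):
-- per-database version store footprint
SELECT database_id, reserved_page_count, reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;
-- cumulative wait statistics, largest first
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;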
I have these two tables:
CREATE TABLE `users` (
`id` bigint NOT NULL AUTO_INCREMENT,
`status` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=330031656 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ;
CREATE TABLE `user_meta` (
`id` int NOT NULL AUTO_INCREMENT,
`user_id` bigint NOT NULL,
`meta_id` bigint NOT NULL,
`value` bigint NOT NULL,
PRIMARY KEY (`id`),
KEY `usermeta_user_id_meta_type_meta_value` (`user_id`,`meta_id`,`value`),
CONSTRAINT `user_meta_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=16728 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ;
The customer requirement is that, for a specific meta_id, they want the users sorted by value. For example, suppose there are 10 meta_ids and the customer picks meta_id=1111. In that case, all users associated with meta_id=1111 should come first, sorted by their value, while the remaining users with no meta_id=1111 association can appear at the bottom in any order.
Similar questions have been asked before, and I tried to build my query based on their answers, but it does not seem to work for me. This answer and this one also discuss using if-else and case-when-then expressions, but when I try any of the following:
select u.id, um.meta_id, um.value from users u
inner join user_meta um on um.user_id = u.id
order by if(um.meta_id=1111, value, 1);
select u.id, um.meta_id, um.value from users u
inner join user_meta um on um.user_id = u.id
order by case um.meta_id
when 1111 then value else 1 end;
select u.id, um.meta_id, um.value from users u
inner join user_meta um on um.user_id = u.id
order by case
when um.meta_id = 1111 then value else u.id end;
I get:
+-----------+---------+------------+
| id | meta_id | value |
+-----------+---------+------------+
| 326480529 | 200 | 1730358000 |
| 326850494 | 1111 | 1730185200 |
| 326785127 | 1111 | 1730271600 |
| 326833934 | 1111 | 1730358000 |
| 326467136 | 1111 | 1730358000 |
| 328079379 | 1111 | 1730793600 |
+-----------+---------+------------+
I expect all users with meta_id=1111 at the top, but they are neither at the top nor sorted among themselves. Similarly, for descending order, the users with meta_id=1111 should be at the top, sorted by value in descending order, while all other users can sit at the bottom, for example:
+-----------+---------+------------+
| id | meta_id | value |
+-----------+---------+------------+
| 328079379 | 1111 | 1730793600 |
| 326833934 | 1111 | 1730358000 |
| 326467136 | 1111 | 1730358000 |
| 326785127 | 1111 | 1730271600 |
| 326850494 | 1111 | 1730185200 |
| 326480529 | 200 | 1730358000 |
+-----------+---------+------------+
Any help or guidance on solving this is much appreciated. Thank you!
I am also posting some INSERT statements for both tables to make it easier to reproduce on a local machine:
INSERT INTO `users` (`id`,`status`) VALUES (328079379,'active');
INSERT INTO `users` (`id`,`status`) VALUES (326833934,'active');
INSERT INTO `users` (`id`,`status`) VALUES (326467136,'deleted');
INSERT INTO `users` (`id`,`status`) VALUES (326785127,'inactive');
INSERT INTO `users` (`id`,`status`) VALUES (326850494,'removed');
INSERT INTO `users` (`id`,`status`) VALUES (326480529,'active');
INSERT INTO `user_meta` (`id`,`user_id`,`meta_id`,`value`) VALUES (13155,328079379,1111,1730793600);
INSERT INTO `user_meta` (`id`,`user_id`,`meta_id`,`value`) VALUES (13045,326833934,1111,1730358000);
INSERT INTO `user_meta` (`id`,`user_id`,`meta_id`,`value`) VALUES (13009,326467136,1111,1730358000);
INSERT INTO `user_meta` (`id`,`user_id`,`meta_id`,`value`) VALUES (13010,326785127,1111,1730271600);
INSERT INTO `user_meta` (`id`,`user_id`,`meta_id`,`value`) VALUES (13051,326850494,1111,1730185200);
INSERT INTO `user_meta` (`id`,`user_id`,`meta_id`,`value`) VALUES (13008,326480529,200,1730358000);
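For reference, a sketch of an ordering expression that should match the expected output above (untested beyond the sample rows; it relies on MySQL evaluating the boolean comparison as 1 or 0):
select u.id, um.meta_id, um.value
from users u
inner join user_meta um on um.user_id = u.id
order by (um.meta_id = 1111) desc, -- matching rows (1) sort before the rest (0)
         um.value;                 -- then order the matches by value
For the descending case, um.value desc would replace um.value on the last line.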
I am creating a partitioned table named TestArticles, assigning multiple filegroups based on the publish year (publishDate). This code (excluding the commented-out part) executes correctly. My task is to add a unique index on the hash column. When I try to do that within the table creation code, I get the following error:
"Column 'publishDate' is partitioning column of the index 'UQ_Articles_hash'. Partition columns for a unique index must be a subset of the index key."
I could create a composite primary key from (id, publishDate, hash), but that is not what I need.
Is there a way to designate hash as a unique index for each filegroup created, or to designate it unique when the whole table is initialized?
USE Articles;
GO
ALTER DATABASE Articles
ADD FILEGROUP Articles2024;
ALTER DATABASE Articles
ADD FILEGROUP Articles2025;
ALTER DATABASE Articles
ADD FILEGROUP Articles2026;
ALTER DATABASE Articles
ADD FILEGROUP Articles2027;
CREATE PARTITION FUNCTION PF_Articles_PublishDate (DATETIME)
AS RANGE RIGHT FOR VALUES
(
'2024-01-01',
'2025-01-01',
'2026-01-01'
);
CREATE PARTITION SCHEME PS_Articles_PublishDate
AS PARTITION PF_Articles_PublishDate
TO
(
Articles2024,
Articles2025,
Articles2026,
Articles2027
);
CREATE TABLE TestArticles (
id INT NOT NULL,
path VARCHAR(200) NULL,
description VARCHAR(100) NOT NULL,
publishDate DATETIME NOT NULL,
hash BIGINT NOT NULL,
authorId INT NOT NULL,
CONSTRAINT PK_Articles PRIMARY KEY CLUSTERED (id, publishDate),
CONSTRAINT FK_Articles_Authors FOREIGN KEY (authorId) REFERENCES dbo.Authors(id)
--,CONSTRAINT UQ_Articles_hash UNIQUE NONCLUSTERED (hash)
) ON PS_Articles_PublishDate (publishDate);
Database: MS SQL Server 2022, version 16.0.1000.6. Error details: Msg 1908, Level 16, State 1.
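For what it is worth, one workaround often suggested for Msg 1908 is to make the unique index non-aligned by placing it on a single filegroup instead of the partition scheme. A sketch (note that a non-aligned index prevents partition SWITCH operations on the table, which may or may not be acceptable here):
CREATE UNIQUE NONCLUSTERED INDEX UQ_Articles_hash
ON TestArticles (hash)
ON [PRIMARY]; -- a single filegroup rather than PS_Articles_PublishDate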
I am trying to set up a primary/secondary replica set, for a zero-downtime migration from the first node to the second over the next few days.
The primary node is set up, its configuration is updated, and I added the keyfile. I did the same on the secondary node.
I also initiated replication and added the secondary node to the replica set. I can see ping and uptime, but the secondary is stuck in the 'STARTUP' state.
However, there is no replica set configuration on the secondary. It also repeats the following in its error log, over and over.
Secondary node (log):
{"t":{"$date":"2024-11-14T11:35:14.018+00:00"},"s":"I", "c":"CONNPOOL", "id":22576, "ctx":"ReplNetwork","msg":"Connecting","attr":{"hostAndPort":"Ubuntu-2204-jammy-amd64-base:27017"}}
{"t":{"$date":"2024-11-14T11:35:14.818+00:00"},"s":"W", "c":"SHARDING", "id":7012500, "ctx":"QueryAnalysisConfigurationsRefresher","msg":"Failed to refresh query analysis configurations, will try again at the next interval","attr":{"error":"PrimarySteppedDown: No primary exists currently"}}
{"t":{"$date":"2024-11-14T11:35:15.000+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2024-11-14T11:35:15.000+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [config.transactions] not found."},"stats":{},"cmd":{"aggregate":"transactions","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"config"}}}
{"t":{"$date":"2024-11-14T11:35:15.001+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [config.image_collection] not found."},"stats":{},"cmd":{"aggregate":"image_collection","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"config"}}}
{"t":{"$date":"2024-11-14T11:35:15.018+00:00"},"s":"I", "c":"CONNPOOL", "id":22576, "ctx":"ReplNetwork","msg":"Connecting","attr":{"hostAndPort":"Ubuntu-2204-jammy-amd64-base:27017"}}
{"t":{"$date":"2024-11-14T11:35:16.000+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found."},"stats":{},"cmd":{"aggregate":"oplog.rs","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"local"}}}
{"t":{"$date":"2024-11-14T11:35:16.000+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [config.transactions] not found."},"stats":{},"cmd":{"aggregate":"transactions","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"config"}}}
{"t":{"$date":"2024-11-14T11:35:16.001+00:00"},"s":"W", "c":"QUERY", "id":23799, "ctx":"ftdc","msg":"Aggregate command executor error","attr":{"error":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [config.image_collection] not found."},"stats":{},"cmd":{"aggregate":"image_collection","cursor":{},"pipeline":[{"$collStats":{"storageStats":{"waitForLock":false,"numericOnly":true}}}],"$db":"config"}}}
{"t":{"$date":"2024-11-14T11:35:16.018+00:00"},"s":"I", "c":"CONNPOOL", "id":22576, "ctx":"ReplNetwork","msg":"Connecting","attr":{"hostAndPort":"Ubuntu-2204-jammy-amd64-base:27017"}}
rs.status() on the primary:
{
set: 'rs0',
date: ISODate('2024-11-14T11:36:01.498Z'),
myState: 1,
term: Long('3'),
syncSourceHost: '',
syncSourceId: -1,
heartbeatIntervalMillis: Long('2000'),
majorityVoteCount: 1,
writeMajorityCount: 1,
votingMembersCount: 1,
writableVotingMembersCount: 1,
optimes: {
lastCommittedOpTime: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
lastCommittedWallTime: ISODate('2024-11-14T11:36:01.474Z'),
readConcernMajorityOpTime: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
appliedOpTime: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
durableOpTime: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
writtenOpTime: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
lastAppliedWallTime: ISODate('2024-11-14T11:36:01.474Z'),
lastDurableWallTime: ISODate('2024-11-14T11:36:01.474Z'),
lastWrittenWallTime: ISODate('2024-11-14T11:36:01.474Z')
},
lastStableRecoveryTimestamp: Timestamp({ t: 1731584157, i: 2 }),
electionCandidateMetrics: {
lastElectionReason: 'electionTimeout',
lastElectionDate: ISODate('2024-11-14T09:11:55.719Z'),
electionTerm: Long('3'),
lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1731575489, i: 10 }), t: Long('2') },
lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1731575489, i: 10 }), t: Long('2') },
numVotesNeeded: 1,
priorityAtElection: 1,
electionTimeoutMillis: Long('10000'),
numCatchUpOps: Long('0'),
newTermStartDate: ISODate('2024-11-14T09:11:55.790Z'),
wMajorityWriteAvailabilityDate: ISODate('2024-11-14T09:11:55.803Z')
},
members: [
{
_id: 0,
name: 'Ubuntu-2204-jammy-amd64-base:27017',
health: 1,
state: 1,
stateStr: 'PRIMARY',
uptime: 8656,
optime: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
optimeDate: ISODate('2024-11-14T11:36:01.000Z'),
optimeWritten: { ts: Timestamp({ t: 1731584161, i: 5 }), t: Long('3') },
optimeWrittenDate: ISODate('2024-11-14T11:36:01.000Z'),
lastAppliedWallTime: ISODate('2024-11-14T11:36:01.474Z'),
lastDurableWallTime: ISODate('2024-11-14T11:36:01.474Z'),
lastWrittenWallTime: ISODate('2024-11-14T11:36:01.474Z'),
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
electionTime: Timestamp({ t: 1731575515, i: 1 }),
electionDate: ISODate('2024-11-14T09:11:55.000Z'),
configVersion: 10,
configTerm: 3,
self: true,
lastHeartbeatMessage: ''
},
{
_id: 1,
name: 'XXXXXXXXXXXXXXX:27017',
health: 1,
state: 0,
stateStr: 'STARTUP',
uptime: 1925,
optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
lastAppliedWallTime: ISODate('1970-01-01T00:00:00.000Z'),
lastDurableWallTime: ISODate('1970-01-01T00:00:00.000Z'),
lastWrittenWallTime: ISODate('1970-01-01T00:00:00.000Z'),
lastHeartbeat: ISODate('2024-11-14T11:36:00.038Z'),
lastHeartbeatRecv: ISODate('1970-01-01T00:00:00.000Z'),
pingMs: Long('9'),
lastHeartbeatMessage: '',
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
configVersion: -2,
configTerm: -1
}
],
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1731584161, i: 5 }),
signature: {
hash: Binary.createFromBase64('XXXXXXXXXX', 0),
keyId: Long('XXXXXXXXXXXX')
}
},
operationTime: Timestamp({ t: 1731584161, i: 5 })
}
rs.config() on the secondary:
MongoServerError[NotYetInitialized]: no replset config has been received
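configVersion: -2 on member 1 in the rs.status() output above suggests the secondary has never received the replica set configuration, which commonly comes down to the member host names not being resolvable or reachable from the secondary. A sketch of the usual checks and fix, with a hypothetical FQDN as a placeholder:
// from the secondary, confirm the primary's advertised name resolves and connects:
//   mongosh --host Ubuntu-2204-jammy-amd64-base:27017
// on the primary, repoint member 0 to a name both nodes can resolve:
cfg = rs.conf()
cfg.members[0].host = "primary.example.internal:27017" // hypothetical placeholder
rs.reconfig(cfg)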
This post follows on from the recommendations on the table's partition design.
I tried truncating the old partition and merging it, but I get the error "Invalid partition number 8 specified for table 'dbo.STYTOTAL_RAW', partition number can range from 1 to 6." It looks like once a merge happens, the while loop's maximum partition number becomes invalid.
For testing I used 3 days, but the original requirement is to drop partitions older than 90 days.
Per @DanGuzman, I also added the PF. The full script is available in my first post, which I have also linked at the top of this post.
--Create function, copy from your script
CREATE PARTITION FUNCTION PF_myDateRange ( [datetime2](7))
AS RANGE RIGHT FOR VALUES
(
'2024-06-01 23:59:59.9999999',
'2024-07-01 23:59:59.9999999',
'2024-08-01 23:59:59.9999999',
'2024-09-01 23:59:59.9999999',
'2024-10-01 23:59:59.9999999',
'2024-10-21 23:59:59.9999999' -- take the max day and round with 12 AM ex: 2024-10-21 00:00:00.0000000
)
GO
use DB_Partition
--Invalid partition number 8 specified for table 'dbo.STYTOTAL_RAW', partition number can range from 1 to 6.
-- We can loop through the partition number directly from the table
--TRUNCATE TABLE [STYTOTAL_RAW] WITH (PARTITIONS (4));
declare @cmd_1 nvarchar(max)
declare @cmd_2 nvarchar(max)
DECLARE @partition_no bigint
DECLARE @PartitionFunction_name nvarchar(128)
DECLARE @PartitionFunction_Upper_value datetime2(7)
DECLARE @minrow int
DECLARE @maxrow int
select @minrow = MIN(p.partition_number), @maxrow = MAX(p.partition_number)
from sys.indexes i
join sys.partitions p ON i.object_id=p.object_id AND i.index_id=p.index_id
join sys.partition_schemes ps on ps.data_space_id = i.data_space_id
join sys.partition_functions pf on pf.function_id = ps.function_id
left join sys.partition_range_values rv on rv.function_id = pf.function_id AND rv.boundary_id = p.partition_number
join sys.allocation_units au ON au.container_id = p.hobt_id
join sys.filegroups fg ON fg.data_space_id = au.data_space_id
where i.object_id = object_id('STYTOTAL_RAW')
and rv.value < DATEADD(DAY, -3, SYSDATETIME())
select @minrow,@maxrow
while (@minrow <=@maxrow)
begin
select @partition_no=partition_number,@PartitionFunction_name=pf.name,@PartitionFunction_Upper_value=cast(rv.value as datetime2(7))
from sys.indexes i
join sys.partitions p ON i.object_id=p.object_id AND i.index_id=p.index_id
join sys.partition_schemes ps on ps.data_space_id = i.data_space_id
join sys.partition_functions pf on pf.function_id = ps.function_id
left join sys.partition_range_values rv on rv.function_id = pf.function_id AND rv.boundary_id = p.partition_number
join sys.allocation_units au ON au.container_id = p.hobt_id
join sys.filegroups fg ON fg.data_space_id = au.data_space_id
where i.object_id = object_id('STYTOTAL_RAW')
and rv.value < DATEADD(DAY, -3, SYSDATETIME())
and p.partition_number = @minrow
SET @cmd_1 = N'TRUNCATE TABLE dbo.STYTOTAL_RAW WITH (PARTITIONS (' + convert(NVARCHAR(128),@partition_no) + N'));'
print @cmd_1
--EXEC sys.sp_executesql @cmd_1
SET @cmd_2 = N'ALTER PARTITION FUNCTION ['+ @PartitionFunction_name+ '] () merge range ('''+convert (NVARCHAR(128), @PartitionFunction_Upper_value) +''');'
print @cmd_2
--EXEC sys.sp_executesql @cmd_2
set @minrow =@minrow +1
end
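The renumbering is the crux: every MERGE shifts the remaining partition numbers down, so a loop over a precomputed @minrow..@maxrow range eventually references numbers that no longer exist. A sketch of the commonly suggested alternative, which re-derives the lowest stale boundary on each pass and always works on partition 1 (with RANGE RIGHT, all rows below the lowest boundary live there); untested, with the table and function names taken from the script above:
DECLARE @boundary datetime2(7);
DECLARE @cmd nvarchar(max);
WHILE 1 = 1
BEGIN
    SET @boundary = NULL;
    -- lowest remaining boundary of the partition function
    SELECT TOP (1) @boundary = CAST(rv.value AS datetime2(7))
    FROM sys.partition_functions pf
    JOIN sys.partition_range_values rv ON rv.function_id = pf.function_id
    WHERE pf.name = N'PF_myDateRange'
    ORDER BY CAST(rv.value AS datetime2(7));
    -- stop once the lowest boundary is inside the retention window
    IF @boundary IS NULL OR @boundary > DATEADD(DAY, -3, SYSDATETIME()) BREAK;
    -- partition 1 holds all rows below @boundary, so it is the stale one
    TRUNCATE TABLE dbo.STYTOTAL_RAW WITH (PARTITIONS (1));
    SET @cmd = N'ALTER PARTITION FUNCTION PF_myDateRange() MERGE RANGE ('''
             + CONVERT(nvarchar(30), @boundary, 121) + N''');';
    EXEC sys.sp_executesql @cmd;
END;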
Please share any suggestions. Thanks for your help.
On MySQL 8.0.40 on Rocky Linux, my database currently uses the keyring plugin. I want to migrate to the keyring component. It reports:
[ERROR] [MY-013106] [Server] Keyring migration failed: Could not initialize the destination keyring.
I worked from the following resources:
- https://dev.mysql.com/doc/refman/8.0/en/keyring-key-migration.html
- https://blogs.oracle.com/mysql/post/keyring-components
- https://bugs.mysql.com/bug.php?id=108197
I have seen someone succeed by turning off encryption, but I want to avoid that.
I created the manifest /usr/sbin/mysqld.my. It contains:
{
"components": "file://component_keyring_file"
}
I checked that the component file exists:
ls /usr/lib64/mysql/plugin/component_keyring_file.so #success
I created the keyring configuration file /usr/lib64/mysql/plugin/component_keyring_file.cnf. Its contents are as follows:
{
"path": "/var/lib/mysql/mysql-keyring/component_keyring_file",
"read_only": false
}
I created the directory for the keyring file:
mkdir /var/lib/mysql/mysql-keyring
chown mysql:mysql /var/lib/mysql/mysql-keyring
After that, I stopped mysqld and tried to run the migration server. I tried the following variations:
mysqld --user=mysql --defaults-file=/etc/my.cnf --keyring-migration-to-component --keyring-migration-source=keyring_file.so --keyring-migration-destination=component_keyring_file.so
mysqld --user=mysql --keyring-migration-to-component --keyring-migration-source=keyring_file.so --keyring-migration-destination=component_keyring_file.so
I understand the migration user cannot be root, so the user is mysql. Also, component_keyring_file cannot live inside the data directory. I tried two paths:
/var/lib/mysql/mysql-keyring # dir for mysql files
/var/lib/mysql-keyring
I updated component_keyring_file.cnf accordingly and made sure the mysql user owns the directory.
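One sanity check sometimes suggested before retrying the migration: start the server normally with the manifest above in place and confirm the component itself initializes, since a component that fails to load on a regular start will also fail as a migration destination. A sketch:
-- should list component_keyring_file and report its status
SELECT * FROM performance_schema.keyring_component_status;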
When SQL Server 2019 is installed, ODBC drivers are installed alongside it, and MSSQL appears to have a strong dependency on them. Is it possible to reconfigure MSSQL to use a newer ODBC driver version and remove the old one? Our third-party vulnerability management scanner is flagging the older ODBC driver version, and there seems to be no way to cleanly remediate the findings.
Details as of this writing:
- The server I am working with runs SQL Server 2019 Developer Edition
- It runs on a Windows Server 2019 Standard VM
- The SQL Server build is 15.0.4405.4 (CU29)
- Before CU29 was installed, this server was on CU27
- The installed ODBC driver version is 17.10.6.1
- All components are x64, not x86
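For reference, the driver inventory described above can be confirmed read-only from PowerShell, assuming the Wdac module that ships in-box with Windows Server:
# enumerate 64-bit "ODBC Driver ... for SQL Server" entries and their attributes
Get-OdbcDriver -Name "ODBC Driver * for SQL Server" -Platform 64-bit |
    Select-Object Name, Attribute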
Troubleshooting/analysis:
- Updating from CU27 to CU29 did not update the ODBC driver version
- I can manually install a newer ODBC driver (18.4.1.1), and it coexists peacefully with 17.10.6.1, but that does not clear the vulnerability
- If I remove 17.10.6.1 to clear the vulnerability, SQL Server Agent fails to start (Event Viewer: the service did not respond to the start or control request in a timely fashion)
- Reinstalling ODBC 17.10.6.1 resolves the problem and lets the Agent start, but the vulnerability reappears
- Repairing SQL Server after installing the v18.x driver simply reinstalls the ODBC driver at the earlier major version (17.x.x.x), which is equally non-compliant and still trips our vulnerability scan findings
- There does not appear to be a newer release of ODBC 17, since MS does not seem to develop and ship 17.x and 18.x releases in parallel. They appear to be pushing us toward v18, but it does not play well with the SQL 2019 Agent.
Key points:
- MSSQL 2019 appears to have a hard dependency on version 17 of the ODBC driver for SQL 2019
- CUs do not seem to update these drivers, if they ever do
- There seems to be no way to reconfigure/point MSSQL 2019 at a newer ODBC driver major version (e.g. 18.x) via SSMS/UI
- I have not yet investigated Windows registry edits
- These drivers get installed regardless of which components I select at instance build time, so minimizing my surface area does not seem to help (I compared this instance with another 2019 instance built with a minimal install; both have the same ODBC drivers.)
Questions:
- Has anyone here run into the same issue? What steps can I take to make MSSQL see/use a newer ODBC driver?
- Are ODBC driver updates pushed with CUs, or do I need to update them manually via the official MS downloads?
Any advice/guidance is appreciated. Thank you.
I need the number of unique values and tried to get it with Oracle's COUNTDISTINCT() function:
select COUNTDISTINCT(a.m_label)
from user_rep a, user_group_rep b, trn_grp_rep c
where b.m_user_id = a.m_reference
...
This results in ORA-00904: "COUNTDISTINCT": invalid identifier.
Using a plain COUNT() works, but it does not return the correct result.
What am I doing wrong? Is there a valid workaround?
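For reference: COUNTDISTINCT is not an Oracle function; the DISTINCT keyword goes inside COUNT. A sketch against the query above, with the remaining predicates elided exactly as in the question:
select COUNT(DISTINCT a.m_label)
from user_rep a, user_group_rep b, trn_grp_rep c
where b.m_user_id = a.m_reference
-- ... remaining predicates unchanged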
I know of the Liberatii software, a gateway that lets you run PostgreSQL (or even other engines) behind the scenes without rewriting an application written for Oracle. Does anyone know of software that, like Liberatii, emulates SQL Server with PostgreSQL behind the scenes? We are trying to find a solution to a performance problem, and one possibility is switching to an Oracle or Postgres engine, but the application modules contain a lot of custom code that would need rewriting, so I am evaluating every possible solution.
In PostgreSQL, you can create a table with 4 hash partitions and place each partition on a different server (a remote PostgreSQL server); this is done with a Foreign Data Wrapper. Now, in MySQL I have heard there is a FEDERATED engine you can use to connect to a remote server, but it works only for an entire table, not for its partitions. I know I could use NDB, but I want to have several MySQL servers and manage the distributed data myself (based on table partitioning). Is there any solution in MySQL to create an (InnoDB) table and distribute its partitions across multiple remote MySQL servers?
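For comparison, a sketch of the PostgreSQL setup referred to above (assumes the postgres_fdw extension is installed and a foreign server named shard1, with a user mapping, already exists; all object names are illustrative):
-- parent table, hash-partitioned
CREATE TABLE measurements (
    id   bigint NOT NULL,
    data text
) PARTITION BY HASH (id);
-- a local partition
CREATE TABLE measurements_p0 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
-- a partition that physically lives on a remote PostgreSQL server
CREATE FOREIGN TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 1)
    SERVER shard1;
MySQL's FEDERATED engine, by contrast, operates only at whole-table granularity, which is exactly the limitation described above.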