This is my login.sql:
SET sqlprompt "_user'@'_connect_identifier > "
SET sqlformat ansiconsole
SET serveroutput on
SET lines 3000
How can I set the sqlprompt to red for Prod (and to green for Test)?
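One approach I have considered (a sketch, untested on SQLcl 18.4; the CASE test on the database name is an assumption about our naming scheme) is to compute an ANSI escape at login time and splice it into the prompt via a substitution variable:

```sql
-- Sketch: pick a color escape at login (assumes PROD databases contain
-- 'PROD' in their name; assumes the terminal honors raw ANSI escapes).
COLUMN color_code NEW_VALUE color_code NOPRINT
SELECT CASE
         WHEN UPPER(SYS_CONTEXT('USERENV', 'DB_NAME')) LIKE '%PROD%'
           THEN CHR(27) || '[31m'   -- red
         ELSE CHR(27) || '[32m'     -- green
       END AS color_code
FROM dual;
SET sqlprompt "&color_code._user'@'_connect_identifier > "
```

A reset escape (CHR(27) || '[0m') would probably need to be appended so the color does not bleed into the query text.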
Version:
sql -version
SQLcl: Release 18.4.0.0 Production
I type this:
SQL> exec dbaspace.long_ops;
SID % Done Start Time Rem [s] Elapsed Message
==== ======= =================== ======= ======= ======================================================================================
There are currently no long running operations.
PL/SQL procedure successfully completed.
When I press the UP ARROW key to recall the last command, SQLcl rewrites my history like this:
SQL> BEGIN dbaspace.long_ops; END;;
Error starting at line : 1 in command -
BEGIN dbss.long_ops; END;;
Error report -
ORA-06550: line 1, column 26:
PLS-00103: Encountered the symbol ";"
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
The rewritten history command does not work.
This is our classic tnsnames.ora:
test1=
(DESCRIPTION=
(CONNECT_TIMEOUT=4)
(TRANSPORT_CONNECT_TIMEOUT=3)
(ENABLE=BROKEN)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=example1.example.com)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=example2.example.com)(PORT=1521))
)
(CONNECT_DATA=
(SERVER=DEDICATED)
(SERVICE_NAME=EXAMPLE.EXAMPLE.DBS)
)
)
I use this SQLcl command:
sql -nohistory -noupdates -S $username/$password@$hostname:$port/$servicename @$filename
How can I specify multiple hostnames, i.e. more than one? It is a kind of active-passive cluster (Exadata).
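One option I am considering (a sketch I have not tested with SQLcl 18.4): since the JDBC thin driver accepts a full connect descriptor, the ADDRESS_LIST from tnsnames.ora could be passed directly on the command line instead of the single host:port/service form:

```shell
sql -nohistory -noupdates -S "$username/$password@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=example1.example.com)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=example2.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=EXAMPLE.EXAMPLE.DBS)))" @$filename
```

The quotes matter, since the parentheses are special to the shell.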
Edit after the first answer:
I added this to the shell script (tnsnames.ora is in this directory):
TNS_ADMIN=/example/example
I call $sqlcl -nohistory -noupdates -S $username/$password@"MY-DB" @$filename
and get this error back:
./script.sh
USER = MY_USER
URL = jdbc:oracle:thin:@MY-DB
Error Message = IO Error: Unknown host specified
USER = MY_USER
URL = jdbc:oracle:thin:@MY-DB:1521/MY-DB
Error Message = IO Error: Invalid connection string format, a valid format is: "host:port:sid"
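I suspect (not yet verified) that the plain assignment is the problem: a bare `TNS_ADMIN=/example/example` line sets a shell-local variable only, so the child `sql` process never sees it and cannot resolve the MY-DB alias. A minimal sketch:

```shell
# Export the variable so child processes (the sql/JDBC client) inherit it;
# a bare assignment stays local to the current shell.
export TNS_ADMIN=/example/example
echo "TNS_ADMIN=$TNS_ADMIN"
```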
I found a similar question for SQL Server Management Studio. But I am using the latest Azure Data Studio on macOS, and we use an on-premises SQL Server (not the Azure cloud).
How can I generate an ERD from an existing database with Azure Data Studio?
I am surprised that xtrabackup supports incremental backups. Does xtrabackup copy the data files and understand their format (i.e. can it look inside them)? What is an LSN, in layman's terms? I found What is log sequence number? How is it used in MySQL?.
A quote from Incremental Backups:
An incremental backup copies each page whose LSN is newer than the previous incremental or full backup's LSN. There are two algorithms for finding the set of such pages to copy. The first, available with all server types and versions, checks page LSNs directly by reading all data pages. The second, available with Percona Server, enables the changed page tracking feature on the server, which notes pages as they are changed. This information is then written out in a compact separate so-called bitmap file. The xtrabackup binary uses that file to read only the data pages it needs for the incremental backup, potentially saving many read requests. The latter algorithm is enabled by default if the xtrabackup binary finds the bitmap file.
I still do not understand how this works. Do you use xtrabackup incremental backups? Do you recommend xtrabackup incremental backups?
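From the docs, I believe the command sequence looks like this (a sketch with placeholder paths, untested by me): each incremental chains off the previous backup's ending LSN (to_lsn), recorded in xtrabackup_checkpoints:

```shell
# Full backup: records the ending LSN (to_lsn) in xtrabackup_checkpoints.
xtrabackup --backup --target-dir=/backups/full

# Incremental: copies only pages whose page LSN is newer than the base
# backup's to_lsn (or, on Percona Server, only the pages the bitmap marks).
xtrabackup --backup --target-dir=/backups/inc1 \
           --incremental-basedir=/backups/full
```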
Query:
SELECT Concat(t.table_schema, '.', t.table_name),
t.table_rows,
snu.non_unique,
smax.cardinality,
( t.table_rows / Ifnull(smax.cardinality, 1) ) AS
"medium distribution",
t.table_rows * ( t.table_rows / Ifnull(smax.cardinality, 1) ) AS
"replication row reads"
FROM information_schema.tables t
LEFT JOIN (SELECT table_schema,
table_name,
Max(cardinality) cardinality
FROM information_schema.statistics
GROUP BY table_schema,
table_name) AS smax
ON t.table_schema = smax.table_schema
AND t.table_name = smax.table_name
LEFT JOIN (SELECT table_schema,
table_name,
Min(non_unique) non_unique
FROM information_schema.statistics
GROUP BY table_schema,
table_name) AS snu
ON t.table_schema = snu.table_schema
AND t.table_name = snu.table_name
WHERE t.table_rows > 0
AND t.table_schema <> 'information_schema'
AND t.table_schema <> 'performance_schema'
AND t.table_schema <> 'mysql'
AND ( snu.non_unique IS NULL
OR snu.non_unique = 1 )
AND ( ( t.table_rows / Ifnull(smax.cardinality, 1) ) > 1.99 )
AND t.table_rows * ( t.table_rows / Ifnull(smax.cardinality, 1) ) >
100000
ORDER BY t.table_rows * ( t.table_rows / Ifnull(smax.cardinality, 1) ) DESC;
Version:
(none)> show variables like '%version%';
+-------------------------+---------------------------+
| Variable_name | Value |
+-------------------------+---------------------------+
| innodb_version | 5.6.36-82.1 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 10.1.26-MariaDB |
| version_comment | Source distribution |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
| version_malloc_library | system |
| version_ssl_library | OpenSSL 1.0.1f 6 Jan 2014 |
| wsrep_patch_version | wsrep_25.19 |
+-------------------------+---------------------------+
10 rows in set
Time: 0.010s
EXPLAIN:
+----+-------------+------------+------+---------------+--------+---------+-------------------------------------------------------------------+--------+----------+--------------------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------+---------------+--------+---------+-------------------------------------------------------------------+--------+----------+--------------------------------------------------------------------------------------+
| 1 | PRIMARY | t | ALL | <null> | <null> | <null> | <null> | <null> | <null> | Using where; Open_full_table; Scanned all databases; Using temporary; Using filesort |
| 1 | PRIMARY | <derived2> | ref | key0 | key0 | 390 | information_schema.t.TABLE_SCHEMA,information_schema.t.TABLE_NAME | 2 | 100.0 | Using where |
| 1 | PRIMARY | <derived3> | ref | key0 | key0 | 390 | information_schema.t.TABLE_SCHEMA,information_schema.t.TABLE_NAME | 2 | 100.0 | Using where |
| 3 | DERIVED | statistics | ALL | <null> | <null> | <null> | <null> | <null> | <null> | Open_frm_only; Scanned all databases; Using temporary; Using filesort |
| 2 | DERIVED | statistics | ALL | <null> | <null> | <null> | <null> | <null> | <null> | Open_full_table; Scanned all databases; Using temporary; Using filesort |
+----+-------------+------------+------+---------------+--------+---------+-------------------------------------------------------------------+--------+----------+--------------------------------------------------------------------------------------+
5 rows in set
Time: 0.022s
Count:
> select count('A') from information_schema.tables;
+------------+
| count('A') |
+------------+
| 7846 |
+------------+
1 row in set
Time: 0.069s
It looks like the undocumented Open_full_table; Scanned all databases; steps are what take so long? How can I optimize this query, or is this duration normal on a busy server?
I saw a new feature in MariaDB 10.3.x, Invisible Columns. What are practical use cases for DBAs and web developers? When should this feature be used?
Columns can be given an INVISIBLE attribute in a CREATE TABLE or ALTER TABLE statement. These columns will then not be listed in the results of a SELECT * statement, nor do they need to be assigned a value in an INSERT statement, unless INSERT explicitly mentions them by name.
Since SELECT * does not return the invisible columns, new tables or views created in this manner will have no trace of the invisible columns. If specifically referenced in the SELECT statement, the columns will be brought into the view/new table, but the INVISIBLE attribute will not.
Invisible columns can be declared as NOT NULL, but then require a DEFAULT value.
I found someone talking about MDL semantics in a Galera/XtraDB bug report.
What is MDL? The term MDL also appears as MDL conflict in the Galera log. Please elaborate.
2018-06-14 17:07:10 140112699321088 [Note] WSREP: cluster conflict due to certification failure for threads:
2018-06-14 17:07:10 140112699321088 [Note] WSREP: Victim thread:
2018-06-14 17:07:10 140112690834176 [Note] WSREP: cluster conflict due to high priority abort for threads:
2018-06-14 17:07:10 140112690834176 [Note] WSREP: Winning thread:
2018-06-14 17:07:10 140112690834176 [Note] WSREP: Victim thread:
2018-06-14 17:07:10 140112690834176 [Note] WSREP: MDL conflict db=APC_MYSQLMON_DB table=TABLE1 ticket=4 solved by abort
2018-06-14 17:07:10 140112695683840 [Note] WSREP: MDL conflict db=APC_MYSQLMON_DB table=TABLE1 ticket=8 solved by abort
2018-06-14 17:07:10 140112695683840 [Note] WSREP: cluster conflict due to certification failure for threads:
2018-06-14 17:07:10 140112695683840 [Note] WSREP: Victim thread:
2018-06-14 17:26:47 140112698108672 [Note] WSREP: cluster conflict due to certification failure for threads:
2018-06-14 17:26:47 140112698108672 [Note] WSREP: Victim thread:
2018-06-14 17:34:48 140087340817152 [Note] WSREP: cluster conflict due to certification failure for threads:
2018-06-14 17:34:48 140087340817152 [Note] WSREP: Victim thread:
2018-06-14 17:36:48 140076554697472 [Note] WSREP: cluster conflict due to high priority abort for threads:
2018-06-14 17:36:48 140076554697472 [Note] WSREP: Winning thread:
2018-06-14 17:36:48 140076554697472 [Note] WSREP: Victim thread:
2018-06-14 17:36:48 140076554697472 [Note] WSREP: MDL conflict db=APC_MYSQLMON_DB table=TABLE1 ticket=4 solved by abort
2018-06-14 17:36:48 140087340817152 [Note] WSREP: MDL conflict db=APC_MYSQLMON_DB table=TABLE1 ticket=8 solved by abort
2018-06-14 17:36:48 140087340817152 [Note] WSREP: cluster conflict due to certification failure for threads:
2018-06-14 17:36:48 140087340817152 [Note] WSREP: Victim thread:
2018-06-14 17:37:50 139917573339904 [Note] WSREP: cluster conflict due to certification failure for threads:
2018-06-14 17:37:50 139917573339904 [Note] WSREP: Victim thread:
2018-06-14 17:05:12 139927950939904 [Warning] WSREP: Failed to report last committed 1941199495, -4 (Interrupted system call)
2018-06-14 17:09:41 139900840323840 [Note] WSREP: cluster conflict due to high priority abort for threads:
2018-06-14 17:09:41 139900840323840 [Note] WSREP: Winning thread:
2018-06-14 17:09:41 139900840323840 [Note] WSREP: Victim thread:
Our VM configuration (hosted on VMware):
# cat /proc/cpuinfo |grep "cpu cores" | awk -F: '{ num+=$2 } END{ print "cpu cores", num }'
cpu cores 32
# free -h
total used free shared buffers cached
Mem: 62G 23G 39G 500K 349M 10G
-/+ buffers/cache: 12G 50G
Swap: 50G 0B 50G
I got a warning about max_connections from pt-variable-advisor:
pt-variable-advisor h=localhost,u=root,p=Quule0juqu7aifohvo2Ahratit --socket /var/vcap/sys/run/mysql/mysqld.sock
(...)
# WARN max_connections: If the server ever really has more than a thousand threads running, then the system is likely to spend more time scheduling threads than really doing useful work.
(...)
Why? Any details?
Configuration in my.cnf:
max_connections = 15360
Settings of the prod database cluster (MariaDB 10.1.x with Galera):
MariaDB [(none)]> SHOW STATUS WHERE `variable_name` = 'Threads_connected';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| Threads_connected | 718 |
+-------------------+-------+
1 row in set (0.01 sec)
MariaDB [(none)]> SHOW STATUS WHERE `variable_name` = 'Max_used_connections';
+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| Max_used_connections | 924 |
+----------------------+-------+
1 row in set (0.02 sec)
The default is 151 to improve performance:
The number of connections permitted is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server. (Previously, the default was 100.) If you need to support more connections, you should set a larger value for this variable.
And:
The maximum number of connections MySQL supports depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM each connection uses, the workload from each connection, and the desired response time. Linux or Solaris should typically be able to support at least 500 to 1000 simultaneous connections, and as many as 10,000 connections.
We currently have 460 users, and each user can open up to 100 connections; that would be the maximum. Is 100 max_connections per user and database too high? With modern connection pooling, could we set it to 20? How should we configure this without overloading our server with context switching? Could a web application get by with a single connection (every statement on the same connection)?
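As a sanity check I computed the theoretical worst case, which is well above the configured limit:

```shell
# 460 users x 100 connections each vs. the configured max_connections
awk 'BEGIN { printf "worst case: %d (max_connections: 15360)\n", 460 * 100 }'
```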
I ran pt-variable-advisor and got a note about the differing settings of max_heap_table_size and tmp_table_size.
Searching the web, I only found old articles (from around 2007).
pt-variable-advisor h=localhost,u=root,p=Quule0juqu7aifohvo2Ahratit --socket /var/vcap/sys/run/mysql/mysqld.sock
(...)
# NOTE tmp_table_size: The effective minimum size of in-memory implicit temporary tables used internally during query execution is min(tmp_table_size, max_heap_table_size), so max_heap_table_size should be at least as large as tmp_table_size.
(...)
Our configuration:
max_heap_table_size = 16777216
tmp_table_size = 33554432
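Given the pt-variable-advisor note above, I worked out the effective in-memory limit for implicit temporary tables with our values, i.e. the smaller of the two settings:

```shell
# effective limit = min(tmp_table_size, max_heap_table_size)
awk 'BEGIN { t = 33554432; h = 16777216; m = (t < h ? t : h);
             printf "%d bytes (%d MiB)\n", m, m / 1048576 }'
```

So despite tmp_table_size being 32 MiB, implicit temporary tables spill to disk at 16 MiB.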
We have not modified the defaults of cf-mysql-release. I see that the MariaDB KB recommends other defaults:
cf_mysql.mysql.tmp_table_size:
description: 'The maximum size (in bytes) of internal in-memory temporary tables'
default: 33554432
cf_mysql.mysql.max_heap_table_size:
description: 'The maximum size (in rows) to which user-created MEMORY tables are permitted to grow'
default: 16777216
I also found Optimizing MySQL tmp_table_size and checked our values and configuration:
MariaDB [(none)]> show global status like 'created_tmp_disk_tables';
+-------------------------+----------+
| Variable_name | Value |
+-------------------------+----------+
| Created_tmp_disk_tables | 12727901 |
+-------------------------+----------+
1 row in set (0.01 sec)
MariaDB [(none)]> show global status like 'created_tmp_tables';
+--------------------+-----------+
| Variable_name | Value |
+--------------------+-----------+
| Created_tmp_tables | 115714303 |
+--------------------+-----------+
1 row in set (0.01 sec)
MariaDB [(none)]> select (12727901*100)/(115714303 + 12727901) as "Created disk tmp tables ratio" from dual;
+-------------------------------+
| Created disk tmp tables ratio |
+-------------------------------+
| 9.9094 |
+-------------------------------+
1 row in set (0.00 sec)
Is something wrong with our (default) configuration? We do not know our workload; we run around 500 small databases for small web applications with varying usage patterns.
The MariaDB and Percona Toolkit versions installed on my MacBook:
brew info percona-toolkit
percona-toolkit: stable 3.0.10 (bottled), HEAD
Percona Toolkit for MySQL
https://www.percona.com/software/percona-toolkit/
/usr/local/Cellar/percona-toolkit/3.0.10 (244 files, 8.4MB) *
Poured from bottle on 2018-05-31 at 09:52:48
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/percona-toolkit.rb
==> Dependencies
Required: mysql ✔, openssl ✔
==> Options
--HEAD
Install HEAD version
Database server version:
show global variables like '%version%';
+-------------------------+---------------------------+
| Variable_name | Value |
+-------------------------+---------------------------+
| innodb_version | 5.6.36-82.1 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 10.1.26-MariaDB |
| version_comment | Source distribution |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
| version_malloc_library | system |
| version_ssl_library | OpenSSL 1.0.1f 6 Jan 2014 |
| wsrep_patch_version | wsrep_25.19 |
+-------------------------+---------------------------+
10 rows in set (0.01 sec)
Interestingly, to install percona-toolkit I had to install Oracle MySQL and then switch back to MariaDB:
brew install mariadb
brew unlink mariadb
brew install percona-toolkit
brew unlink mysql
brew link mariadb
We use a Galera cluster with row-based replication:
show global variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW |
+---------------+-------+
1 row in set (0.00 sec)
I tried a simple use case:
mysqlbinlog mysql-bin.0013* > all.sql
pt-query-digest --type binlog all.sql
all.sql: 1% 37:37 remain
(...)
all.sql: 96% 01:23 remain
all.sql: 98% 00:30 remain
# 2417.5s user time, 51.4s system time, 89.16M rss, 4.24G vsz
# Current date: Fri Jun 1 07:42:57 2018
# Hostname: aukVivi0009
# Files: all.sql
# Overall: 0 total, 2.05k unique, 0 QPS, 0x concurrency __________________
# Time range: 2018-05-26 02:00:54 to 2018-05-31 08:05:28
# Attribute total min max avg 95% stddev median
# ============ ======= ======= ======= ======= ======= ======= =======
# Query size 10.68G 6 287.51k 492.88 833.10 1.92k 107.34
# Profile
# Rank Query ID Response time Calls R/Call V/M Item
# =========== =========== =========== =========== =========== ===== ======
Why is the output empty? Is pt-query-digest incompatible with MariaDB? A change in the binlog format? Any workaround?
During an SST (xtrabackup), my data directory looks like this (there is also a hidden .sst directory):
-rw-rw---- 1 vcap vcap 113 Apr 16 08:13 grastate.dat
-rw-rw---- 1 vcap vcap 265 Apr 16 08:13 gvwstate.dat
-rw-rw---- 1 vcap vcap 0 Apr 16 08:13 sst_in_progress
-rw------- 1 vcap vcap 536872232 Apr 16 08:29 galera.cache
-rw------- 1 vcap vcap 134217728 Apr 16 08:32 gcache.page.000000
-rw------- 1 vcap vcap 134217728 Apr 16 08:36 gcache.page.000001
-rw------- 1 vcap vcap 134217728 Apr 16 08:40 gcache.page.000002
-rw------- 1 vcap vcap 134217728 Apr 16 08:44 gcache.page.000003
-rw------- 1 vcap vcap 134217728 Apr 16 08:48 gcache.page.000004
-rw------- 1 vcap vcap 134217728 Apr 16 08:52 gcache.page.000005
-rw------- 1 vcap vcap 134217728 Apr 16 08:56 gcache.page.000006
-rw------- 1 vcap vcap 134217728 Apr 16 09:00 gcache.page.000007
-rw------- 1 vcap vcap 134217728 Apr 16 09:03 gcache.page.000008
-rw------- 1 vcap vcap 134217728 Apr 16 09:07 gcache.page.000009
-rw------- 1 vcap vcap 134217728 Apr 16 09:11 gcache.page.000010
-rw------- 1 vcap vcap 134217728 Apr 16 09:15 gcache.page.000011
-rw------- 1 vcap vcap 134217728 Apr 16 09:19 gcache.page.000012
-rw-r----- 1 vcap vcap 366 Apr 16 09:20 mysql-bin.000021
-rw-rw---- 1 vcap vcap 19 Apr 16 09:20 mysql-bin.index
-rw------- 1 vcap vcap 134217728 Apr 16 09:23 gcache.page.000013
-rw------- 1 vcap vcap 134217728 Apr 16 09:27 gcache.page.000014
-rw------- 1 vcap vcap 134217728 Apr 16 09:30 gcache.page.000015
-rw------- 1 vcap vcap 134217728 Apr 16 09:32 gcache.page.000016
As I understand it, writes during the SST are stored in the gcache.page.0000* files. Is that correct? We set the maximum size to 512 MB (gcache.size). What happens when we hit the maximum? What if 2 GB were inserted during the SST?
2018-04-16 8:29:00 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000000 of size 134217728 bytes
2018-04-16 8:32:54 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000001 of size 134217728 bytes
2018-04-16 8:36:48 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000002 of size 134217728 bytes
2018-04-16 8:40:41 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000003 of size 134217728 bytes
2018-04-16 8:44:33 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000004 of size 134217728 bytes
2018-04-16 8:48:26 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000005 of size 134217728 bytes
2018-04-16 8:52:19 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000006 of size 134217728 bytes
2018-04-16 8:56:12 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000007 of size 134217728 bytes
2018-04-16 9:00:05 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000008 of size 134217728 bytes
2018-04-16 9:03:57 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000009 of size 134217728 bytes
2018-04-16 9:07:50 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000010 of size 134217728 bytes
2018-04-16 9:11:42 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000011 of size 134217728 bytes
2018-04-16 9:15:34 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000012 of size 134217728 bytes
2018-04-16 9:19:33 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000013 of size 134217728 bytes
2018-04-16 9:23:23 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000014 of size 134217728 bytes
2018-04-16 9:27:02 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000015 of size 134217728 bytes
2018-04-16 9:30:40 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000016 of size 134217728 bytes
2018-04-16 9:34:18 140448739440384 [Note] WSREP: Created page /var/vcap/store/mysql/gcache.page.000017 of size 134217728 bytes
2018-04-16 9:34:58 140449443399552 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2018-04-16 9:35:07 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000000
2018-04-16 9:35:08 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000001
2018-04-16 9:35:09 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000002
2018-04-16 9:35:11 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000003
2018-04-16 9:35:12 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000004
2018-04-16 9:35:13 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000005
2018-04-16 9:35:15 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000006
2018-04-16 9:35:16 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000007
2018-04-16 9:35:17 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000008
2018-04-16 9:35:19 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000009
2018-04-16 9:35:20 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000010
2018-04-16 9:35:21 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000011
2018-04-16 9:35:22 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000012
2018-04-16 9:35:32 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000013
2018-04-16 9:35:45 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000014
2018-04-16 9:35:59 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000015
2018-04-16 9:36:13 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000016
2018-04-16 9:36:16 140448383366912 [Note] WSREP: Deleted page /var/vcap/store/mysql/gcache.page.000017
2018-04-16 9:36:27 140436269124352 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2018-04-16 9:36:39 140588932401024 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2018-04-16 9:37:55 140579234867968 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
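For scale: the log above shows 18 page files (gcache.page.000000 through 000017) of 134217728 bytes each being created during the SST, i.e. writes spilled far past the 512 MB ring buffer:

```shell
# 18 on-disk gcache page files of 128 MiB each
awk 'BEGIN { printf "%.2f GiB\n", 18 * 134217728 / 1073741824 }'
```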
In a lot of SHOW FULL PROCESSLIST output I see only COMMIT or commit (a mix of upper and lower case). What are these transactions? Why is there no SQL statement? We are running MariaDB 10.1.x with Galera replication (3 nodes).
How should I interpret these transactions?
> select COMMAND,TIME,STATE,INFO,TIME_MS,STAGE,MAX_STAGE,PROGRESS,MEMORY_USED,EXAMINED_ROWS,QUERY_ID,INFO_BINARY,TID from INFORMATION_SCHEMA.PROCESSLIST where INFO like '%commit%';
+---------+------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------+-------+-----------+----------+-------------+---------------+-----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
| COMMAND | TIME | STATE | INFO | TIME_MS | STAGE | MAX_STAGE | PROGRESS | MEMORY_USED | EXAMINED_ROWS | QUERY_ID | INFO_BINARY | TID |
+---------+------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------+-------+-----------+----------+-------------+---------------+-----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
| Query | 1 | init | COMMIT | 1267.015 | 0 | 0 | 0.000 | 67544 | 0 | 483610134 | COMMIT | 12241 |
| Query | 112 | init | COMMIT | 112442.763 | 0 | 0 | 0.000 | 67544 | 0 | 483594429 | COMMIT | 12003 |
| Query | 151 | init | COMMIT | 151914.251 | 0 | 0 | 0.000 | 67544 | 0 | 483588122 | COMMIT | 11972 |
| Query | 156 | init | COMMIT | 156962.716 | 0 | 0 | 0.000 | 141368 | 0 | 483587455 | COMMIT | 11962 |
| Query | 156 | init | COMMIT | 156961.757 | 0 | 0 | 0.000 | 141368 | 0 | 483587456 | COMMIT | 11960 |
| Query | 182 | init | commit | 182230.206 | 0 | 0 | 0.000 | 67544 | 0 | 483584325 | commit | 11801 |
| Query | 229 | init | COMMIT | 229144.061 | 0 | 0 | 0.000 | 67544 | 0 | 483578193 | COMMIT | 11529 |
| Query | 0 | Filling schema table | select COMMAND,TIME,STATE,INFO,TIME_MS,STAGE,MAX_STAGE,PROGRESS,MEMORY_USED,EXAMINED_ROWS,QUERY_ID,INFO_BINARY,TID from INFORMATION_SCHEMA.PROCESSLIST where INFO like '%commit%' | 0.346 | 0 | 0 | 0.000 | 104808 | 0 | 483610236 | select COMMAND,TIME,STATE,INFO,TIME_MS,STAGE,MAX_STAGE,PROGRESS,MEMORY_USED,EXAMINED_ROWS,QUERY_ID,INFO_BINARY,TID from INFORMATION_SCHEMA.PROCESSLIST where INFO like '%commit%' | 11359 |
| Query | 66 | init | commit | 66835.790 | 0 | 0 | 0.000 | 67544 | 0 | 483601099 | commit | 10917 |
| Query | 353 | init | commit | 353104.108 | 0 | 0 | 0.000 | 67544 | 0 | 483561401 | commit | 10807 |
| Query | 494 | init | COMMIT | 494696.772 | 0 | 0 | 0.000 | 338232 | 0 | 483540392 | COMMIT | 9997 |
Quoted from openark-kit. The software appears to be unmaintained (hosted on Google Code, last commit in 2013):
oak-show-limits: shows the AUTO_INCREMENT "free space".
What is this? How can I look at the AUTO_INCREMENT "free space" without this tool?
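I sketched a possible replacement query from information_schema (an assumption on my part: it presumes a signed INT counter, so the divisor needs adjusting for the actual column type):

```sql
-- AUTO_INCREMENT "free space" per table, assuming a signed INT counter
-- (max 2147483647); BIGINT etc. would need a different divisor.
SELECT table_schema,
       table_name,
       auto_increment,
       ROUND(auto_increment / 2147483647 * 100, 2) AS pct_used
FROM   information_schema.tables
WHERE  auto_increment IS NOT NULL
ORDER  BY pct_used DESC;
```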
I created a user with full privileges in its own database (dbOwner) and read-only access to administrative commands (clusterMonitor):
use customerdb
(mongod-3.4.7) customerdb> db.createUser( { user: "customer",
... pwd: "customerpw",
... roles: [ { role: "clusterMonitor", db: "admin" },
... { role: "dbOwner", db: "customerdb" }] },
... { w: "majority" , wtimeout: 5000 } )
Successfully added user: {
"user": "customer",
"roles": [
{
"role": "clusterMonitor",
"db": "admin"
},
{
"role": "dbOwner",
"db": "customerdb"
}
]
}
I enabled authentication and logged in with the new user. This is the latest Homebrew-installed MongoDB, running as a single instance.
$ mongo -u customer -p customerpw localhost --authenticationDatabase=customerdb
Why does getRoles() show me the enableSharding role? I found no explanation in the documentation.
> db.getRoles(
... {
... rolesInfo: 1,
... showPrivileges:false,
... showBuiltinRoles: true
... }
... )
[
{
"role": "dbAdmin",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ]
},
{
"role": "dbOwner",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ]
},
{
"role": "enableSharding",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ]
},
{
"role": "read",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ]
},
{
"role": "readWrite",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ]
},
{
"role": "userAdmin",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ]
}
]
The privileges of the enableSharding role:
{
"role": "enableSharding",
"db": "customerdb",
"isBuiltin": true,
"roles": [ ],
"inheritedRoles": [ ],
"privileges": [
{
"resource": {
"db": "",
"collection": ""
},
"actions": [
"enableSharding"
]
}
],
"inheritedPrivileges": [
{
"resource": {
"db": "",
"collection": ""
},
"actions": [
"enableSharding"
]
}
]
}
I tested this in a sharded cluster on a mongos with the following version:
MongoDB Enterprise mongos> db.version()
3.2.11
and on a MacBook with a single mongod, version 3.4.7.
I suppose I did something wrong when creating the user and granting the roles?
I did a quick and dirty test of mysqldump with and without --no-autocommit on my MacBook (latest Homebrew-installed MariaDB): I restored a 4.6 GB *.sql file into an empty MariaDB. A dump taken without --no-autocommit took 5m42.072s to restore; with --no-autocommit it took 1 minute.
The other options used for the dump were --max_allowed_packet=1G --flush-logs --single-transaction --all-databases
With --no-autocommit, each table's INSERTs are wrapped like this:
set autocommit=0;
INSERT INTO ...
UNLOCK TABLES;
commit;
When should --no-autocommit be used? In which use cases does this option make sense?
mysqltuner.pl shows me this result:
[!!] 2 different collations for database ccdb
I do not see two different COLLATE settings at the database level. Is that even possible? How can I verify it?
> show create database ccdb;
+----------+---------------------------------------------------------------------------------------+
| Database | Create Database |
+----------+---------------------------------------------------------------------------------------+
| ccdb | CREATE DATABASE `ccdb` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci */ |
+----------+---------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
> select * from information_schema.schemata where schema_name = 'ccdb';
+--------------+-------------+----------------------------+------------------------+----------+
| CATALOG_NAME | SCHEMA_NAME | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME | SQL_PATH |
+--------------+-------------+----------------------------+------------------------+----------+
| def | ccdb | utf8 | utf8_unicode_ci | NULL |
+--------------+-------------+----------------------------+------------------------+----------+
1 row in set (0.00 sec)
I do see different TABLE_COLLATION values. Is that a problem?
> SELECT count(*), TABLE_COLLATION FROM information_schema.TABLES WHERE TABLE_SCHEMA='ccdb' group by TABLE_COLLATION;
+----------+-----------------+
| count(*) | TABLE_COLLATION |
+----------+-----------------+
| 29 | utf8_bin |
| 26 | utf8_general_ci |
+----------+-----------------+
2 rows in set (0.00 sec)
> select version();
+-----------------+
| version() |
+-----------------+
| 10.1.20-MariaDB |
+-----------------+
1 row in set (0.00 sec)
This query examines only one document and returns only one document, but it is very slow:
2017-05-22T07:13:24.548+0000 I COMMAND [conn40] query databasename.collectionname query: { _id: ObjectId('576d4ce3f2d62a001e84a9b8') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8009ms
2017-05-22T07:13:24.549+0000 I COMMAND [conn10] query databasename.collectionname query: { _id: ObjectId('576d4db35de5fa001ebdd77a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8010ms
2017-05-22T07:13:24.553+0000 I COMMAND [conn47] query databasename.collectionname query: { _id: ObjectId('576d44b7ea8351001ea5fb22') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8014ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn52] query databasename.collectionname query: { _id: ObjectId('576d457ceb82a0001e205bfa') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn41] query databasename.collectionname query: { _id: ObjectId('576d457ec0697c001e1e1779') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn39] query databasename.collectionname query: { _id: ObjectId('576d44b8ea8351001ea5fb27') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.561+0000 I COMMAND [conn34] query databasename.collectionname query: { _id: ObjectId('576d44b55de5fa001ebdd31e') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8022ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn32] query databasename.collectionname query: { _id: ObjectId('576d4df6d738a7001ef2a235') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn51] query databasename.collectionname query: { _id: ObjectId('576d48165de5fa001ebdd55a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8024ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn17] query databasename.collectionname query: { _id: ObjectId('576d44c19f2382001e953717') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn8] query databasename.collectionname query: { _id: ObjectId('576d45d256c22e001efdb382') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn42] query databasename.collectionname query: { _id: ObjectId('576d44bd57c75e001e6e2302') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn6] query databasename.collectionname query: { _id: ObjectId('576d44b394e731001e7cd530') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.571+0000 I COMMAND [conn31] query databasename.collectionname query: { _id: ObjectId('576d4dbcb7289f001e64e32b') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8032ms
This looks like very slow disk I/O. What does planSummary: IDHACK mean? Is there more information about IDHACK anywhere?
I need to test restores with Ops Manager. For this I "cloned" the production sharded cluster: I created VMs of the same size as production and ran mongodump/mongorestore (an Ops Manager deployment). My test (of the restore) does not need to be a consistent copy; losing around 5 GB is fine with me.
DATA SIZE: 573.6 GB
shard0
142.6 GB
shard1
145.94 GB
shard2
142.55 GB
shard3
142.52 GB
For simplicity, I would like to use mongodump and pipe it through the mongos.
I found an old document (v3.0), Backup a Small Sharded Cluster with mongodump; it no longer exists for newer MongoDB versions.
If your sharded cluster holds a small data set, you can connect to a mongos using mongodump.
What counts as a small data set in GB? See my deployment above.
If you use mongodump without specifying a database or collection, mongodump will capture collection data and the cluster metadata from the config servers.
So I do not need to back up the config replica set explicitly?
To restore to a sharded cluster, you must deploy and configure sharding before restoring data from the backup. See Deploy a Sharded Cluster for more information.
In plain English, this means I need to define the shard key (and enable sharding) before restoring?
Am I missing any steps or anything important?