I recently read about a setting in Percona Server called max_binlog_files that limits the total number of binlog files. This is exactly what I need. I currently have binlog_expire_logs_seconds set to the equivalent of 3 days, and 99% of the time that works great. However, the project I'm working on right now runs a huge number of queries, and the binary logs consume more than 70GB in just a few hours. In situations like that I really do need to cap the number of files.
Does MySQL have any native setting to accomplish this? What is the best way to limit the total disk space consumed by the binary logs?
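For context, the only native knobs I'm aware of are the time-based expiry and manual purging, neither of which caps total size (the values below are placeholders):

-- Shrink the time-based expiry at runtime (MySQL 8.0; value is a placeholder).
SET GLOBAL binlog_expire_logs_seconds = 6 * 3600;
-- Or purge manually once the logs blow up (placeholder interval).
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 6 HOUR;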
I want to order all the rows in a table so that the categories cycle, one row per category per pass. For example, given the following data:
+---------+-------------+
| item_id | category_id |
+---------+-------------+
| 4013738 |        1102 |
| 4016142 |        1102 |
| 4027380 |        1102 |
| 4029166 |        1014 |
| 4031335 |        1125 |
| 4031984 |        1014 |
| 4031986 |        1014 |
| 5034654 |        1123 |
| 5034656 |        1125 |
| 5034662 |        1125 |
| 5034735 |        1109 |
| 5034736 |        1109 |
| 5034737 |        1109 |
| 5040226 |        1123 |
| 5040227 |        1123 |
+---------+-------------+
The desired result set should look like this:
+---------+-------------+
| item_id | category_id |
+---------+-------------+
| 4029166 |        1014 |
| 4013738 |        1102 |
| 5034735 |        1109 |
| 5034654 |        1123 |
| 4031335 |        1125 |
| 4031984 |        1014 |
| 4016142 |        1102 |
| 5034736 |        1109 |
| 5040226 |        1123 |
| 5034656 |        1125 |
| 4031986 |        1014 |
| 4027380 |        1102 |
| 5034737 |        1109 |
| 5040227 |        1123 |
| 5034662 |        1125 |
+---------+-------------+
item_id should also be ordered across the consecutive categories. If some categories contain more items than others, the cycle should simply continue through whatever rows remain using the same logic.
In a scripting language like PHP this would be a fairly trivial task, but for the life of me I can't work out how to do it in SQL.
Here is a db-fiddle with sample data: https://www.db-fiddle.com/f/ioZzvoQfnSNiowe6Rp7QP5/0
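One possible approach, shown here only as a sketch (it assumes MySQL 8.0+ window functions and a hypothetical table name items), is to number the rows within each category and sort by that number first:

-- Sketch only: `items` is a hypothetical name for the table shown above.
SELECT item_id, category_id
FROM (
    SELECT item_id,
           category_id,
           ROW_NUMBER() OVER (PARTITION BY category_id ORDER BY item_id) AS pass_no
    FROM items
) AS numbered
ORDER BY pass_no, category_id, item_id;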
In MySQL I'm trying a query like the one below, but I'm not getting the result I want:
SELECT ROUND(t.price / t.qty, IF(qty > 1, 4, 2)) AS unit_cost
FROM (
    SELECT 0.10 AS price, 1 AS qty
    UNION
    SELECT 2.60 AS price, 25 AS qty
) t
What I want is:
+-----------+
| unit_cost |
+-----------+
|      0.10 |
|    0.1040 |
+-----------+
But for some reason the result is:
+-----------+
| unit_cost |
+-----------+
|  0.100000 |
|  0.104000 |
+-----------+
Oddly, this works:
SELECT ROUND(0.10000, IF (1=1, 2, 4));
So I know conditional rounding is possible. How can I achieve the expected result set?
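One possible workaround, shown only as a sketch (it returns strings rather than DECIMALs, which may or may not be acceptable), is to round and cast each branch separately so the per-row scale survives:

-- Sketch only: CAST to CHAR keeps each row's scale, at the cost of a string result.
SELECT IF(qty > 1,
          CAST(ROUND(price / qty, 4) AS CHAR),
          CAST(ROUND(price / qty, 2) AS CHAR)) AS unit_cost
FROM (
    SELECT 0.10 AS price, 1 AS qty
    UNION
    SELECT 2.60 AS price, 25 AS qty
) t;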
I'm using mysqldump to take regular backups of a schema made up entirely of InnoDB tables. The mysqldump documentation says this about --single-transaction:
This option is mutually exclusive with the --lock-tables option, because LOCK TABLES causes any pending transactions to be committed implicitly.
However, there are roughly a hundred million blog posts and answers here (and elsewhere on the Stack Exchange network) recommending:
mysqldump --single-transaction --skip-lock-tables my_database > my_database.sql
If the two options are mutually exclusive, I would think that specifying --single-transaction alone should be enough. However, the documentation also says this about --opt:
This option, enabled by default, is shorthand for the combination of --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset.
That leads me to believe that --lock-tables is on by default.
Do I need to specify --skip-lock-tables as well, or is setting --single-transaction alone enough to guarantee that tables won't be locked during the dump?
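For what it's worth, my understanding (an assumption worth verifying against your mysqldump version) is that --single-transaction essentially starts the dump with statements along these lines, which is why it conflicts with table locking:

-- Roughly what --single-transaction issues at the start of a dump (illustrative only).
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;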
Why does casting this REGEXP_SUBSTR() result to DECIMAL fail?
SELECT
    REGEXP_SUBSTR('Cost (-$14.18)', '(?<=Cost [(]-[$])[0-9.]+') AS _extracted,
    CAST(REGEXP_SUBSTR('Cost (-$14.18)', '(?<=Cost [(]-[$])[0-9.]+') AS DECIMAL(8,2)) AS cost_1,
    CAST((SELECT _extracted) AS DECIMAL(8,2)) AS cost_2,
    CAST((SELECT _extracted) * 1 AS DECIMAL(8,2)) AS cost_3,
    CAST('14.18' AS DECIMAL(8,2)) AS cost_4;
+------------+--------+--------+--------+--------+
| _extracted | cost_1 | cost_2 | cost_3 | cost_4 |
+------------+--------+--------+--------+--------+
| 14.18      |  14.00 |  14.00 |  14.18 |  14.18 |
+------------+--------+--------+--------+--------+
Casting a plain string, as in cost_4, seems to work. Multiplying the REGEXP_SUBSTR() result by 1 also seems to work. But simply casting the result, as I do in cost_1 and cost_2, fails to produce the correct fixed-point version of _extracted.
Strangely, in my application the back-reference approach used in cost_2 actually produces the correct result. I can't reproduce that anywhere else, but I thought it worth mentioning.
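For anyone trying to reproduce this, the only pattern that has worked reliably for me is the cost_3 trick isolated into a standalone statement (sketch only; it forces a numeric context before the cast):

SELECT CAST(REGEXP_SUBSTR('Cost (-$14.18)', '(?<=Cost [(]-[$])[0-9.]+') * 1
            AS DECIMAL(8,2)) AS cost;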
This will be hard to reproduce, but hopefully someone can shed some light on my problem based on the logic involved. After running into some intermittent deadlock issues during a transaction, I implemented a retry strategy along these lines (PHP):
public function execute_query($sql) {
    $try = 1;
    $log_msg = '';
    while (true) {
        $query = $this->link->query($sql);
        $error_no = $this->link->errno;
        $error_msg = $this->link->error;
        if (!$error_no) {
            if ($try > 1) {
                $this->debug_log->write('Deadlock Succeeded after ' . $try . ' Tries ::: ' . $sql);
            }
            return $query;
        } else {
            $log_msg = 'Error: ' . $error_msg . ' ::: Error No: ' . $error_no . ' ::: ' . ($try > 1 ? $try . ' Tries ::: ' : '') . $sql;
            $this->debug_log->write($log_msg);
            if ($error_no == 1213 && $try < self::DEADLOCK_RETRY) {
                // retry when deadlock occurs
                sleep($try * 2);
                $try++;
            } else {
                throw new ErrorException($log_msg);
                exit();
            }
        }
    }
}
This seems to have had exactly the opposite effect of what I was hoping for: the first half of my transaction appears to get rolled back, while the second half commits.
For example:
START TRANSACTION;
Query 1
Query 2
Query 3 (deadlock occurs, query gets retried and succeeds on 2nd attempt)
Query 4
Query 5
COMMIT;
At the end of all this I'm left with queries 3, 4, and 5 having taken effect, but not queries 1 and 2. I don't understand how that's possible; it has now happened twice, and I can't reproduce the deadlock to test or work out a strategy that works.
Can anyone explain why retrying a failed query inside an InnoDB transaction would cause half of the transaction to roll back and the other half to commit?
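In case it helps with debugging, one check that could be added just before each retry (a diagnostic sketch only, not a fix) is to ask InnoDB whether the connection is still inside an open transaction at that point:

-- Diagnostic sketch: is this connection still inside an open transaction?
SELECT trx_id, trx_state, trx_started
FROM information_schema.innodb_trx
WHERE trx_mysql_thread_id = CONNECTION_ID();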
Update (tl;dr):
I filed a bug report here: https://bugs.mysql.com/bug.php?id=99593 , which has been verified, and a workaround was provided. See the answer below for details.
Certain queries seem to struggle under MySQL 8.0.20, and I'm wondering whether anyone can point me toward possible solutions. My old server is still up and running on 5.7.30, so it's easy to A/B the performance results. Both servers have 32GB of RAM, nearly identical configurations, and all tables are InnoDB. Here are some of the (relevant) settings:
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_buffer_pool_instances = 12
innodb_buffer_pool_size = 16G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1536M
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_io_capacity = 5000
innodb_thread_concurrency = 0
SELECT DISTINCT vehicle_id, submodel_id, store_id
FROM product_to_store pts
JOIN product_to_vehicle ptv USING (product_id)
WHERE vehicle_id != 0 AND pts.store_id = 21;
This query produces the following EXPLAIN:
MySQL 8.0.20 (query takes 24 seconds):
+----+-------------+-------+------------+------+-------------------------------------------+--------------------------+---------+----------------+--------+----------+------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+------+-------------------------------------------+--------------------------+---------+----------------+--------+----------+------------------------------+
| 1 | SIMPLE | pts | NULL | ref | PRIMARY,product_id,store_id,store_product | store_id | 4 | const | 813308 | 100.00 | Using index; Using temporary |
| 1 | SIMPLE | ptv | NULL | ref | product_vehicle_submodel,vehicle_product | product_vehicle_submodel | 4 | pts.product_id | 53 | 50.00 | Using where; Using index |
+----+-------------+-------+------------+------+-------------------------------------------+--------------------------+---------+----------------+--------+----------+------------------------------+
MySQL 5.7.30 (query takes 12 seconds):
+----+-------------+-------+------------+------+-------------------------------------------+--------------------------+---------+----------------+--------+----------+------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+------+-------------------------------------------+--------------------------+---------+----------------+--------+----------+------------------------------+
| 1 | SIMPLE | pts | NULL | ref | PRIMARY,product_id,store_id,store_product | store_product | 4 | const | 547242 | 100.00 | Using index; Using temporary |
| 1 | SIMPLE | ptv | NULL | ref | product_vehicle_submodel,vehicle_product | product_vehicle_submodel | 4 | pts.product_id | 22 | 50.00 | Using where; Using index |
+----+-------------+-------+------------+------+-------------------------------------------+--------------------------+---------+----------------+--------+----------+------------------------------+
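Since the only visible difference in this plan is the index chosen on pts (store_id on 8.0 versus store_product on 5.7), two experiments that seem worth trying (not presented as fixes) are refreshing the statistics and forcing the 5.7 index:

-- Experiment only: refresh optimizer statistics on the tables involved.
ANALYZE TABLE product_to_store, product_to_vehicle;
-- Experiment only: force the index 5.7 chooses and compare timings.
SELECT DISTINCT vehicle_id, submodel_id, store_id
FROM product_to_store pts FORCE INDEX (store_product)
JOIN product_to_vehicle ptv USING (product_id)
WHERE vehicle_id != 0 AND pts.store_id = 21;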
The two tables in question are identical on both servers. In this case the plans look a little different, but I have others like this:
SELECT DISTINCT vehicle_type_id, vehicle_type_name
FROM base_vehicle bv
INNER JOIN vehicle_type vt USING (vehicle_type_id);
This produces an identical EXPLAIN on both servers, yet it averages 0.07 seconds on MySQL 5.7 and 0.30 seconds on MySQL 8, roughly 4x slower!
+----+-------------+-------+------------+-------+-----------------+-------------------+---------+--------------------+------+----------+------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+-------+-----------------+-------------------+---------+--------------------+------+----------+------------------------------+
| 1 | SIMPLE | vt | NULL | index | PRIMARY | vehicle_type_name | 194 | NULL | 11 | 100.00 | Using index; Using temporary |
| 1 | SIMPLE | bv | NULL | ref | vehicle_type_id | vehicle_type_id | 2 | vt.vehicle_type_id | 6428 | 100.00 | Using index |
+----+-------------+-------+------------+-------+-----------------+-------------------+---------+--------------------+------+----------+------------------------------+
At this point I'm completely at a loss, and I'm hoping someone can help shed light on what could be causing such poor performance after the upgrade.
Update: As requested, here are the schemas of the tables involved in the queries above:
CREATE TABLE `product_to_store` (
  `product_id` int NOT NULL,
  `store_id` int NOT NULL,
  PRIMARY KEY (`product_id`,`store_id`),
  KEY `product_id` (`product_id`),
  KEY `store_id` (`store_id`),
  KEY `store_product` (`store_id`,`product_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `product_to_vehicle` (
  `product_to_vehicle_id` int NOT NULL AUTO_INCREMENT,
  `product_id` int NOT NULL,
  `vehicle_id` mediumint NOT NULL DEFAULT '0',
  `submodel_id` smallint NOT NULL DEFAULT '0',
  PRIMARY KEY (`product_to_vehicle_id`),
  KEY `submodel_id` (`submodel_id`),
  KEY `product_vehicle_submodel` (`product_id`,`vehicle_id`,`submodel_id`),
  KEY `vehicle_product` (`vehicle_id`,`product_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `vehicle_type` (
  `vehicle_type_id` smallint NOT NULL AUTO_INCREMENT,
  `vehicle_type_name` varchar(64) NOT NULL,
  PRIMARY KEY (`vehicle_type_id`),
  KEY `vehicle_type_name` (`vehicle_type_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `base_vehicle` (
  `vehicle_id` mediumint NOT NULL AUTO_INCREMENT,
  `year` smallint NOT NULL DEFAULT '0',
  `make_id` smallint NOT NULL DEFAULT '0',
  `model_id` mediumint NOT NULL DEFAULT '0',
  `vehicle_type_id` smallint NOT NULL DEFAULT '0',
  PRIMARY KEY (`vehicle_id`),
  KEY `make_id` (`make_id`),
  KEY `model_id` (`model_id`),
  KEY `year_make` (`year`,`make_id`),
  KEY `year_model` (`year`,`model_id`),
  KEY `vehicle_type_id` (`vehicle_type_id`),
  KEY `ymm` (`year`,`make_id`,`model_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Table status: Interestingly, TABLE_ROWS is wrong on both servers. select count(1) from product_to_vehicle; gives me 18330148 in both cases, and the 8.0 tables are the result of a dump and import into 8.0, so there's no reason these should differ.
Table status on 8.0.20
+--------------------+--------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+----------------+
| TABLE_NAME | ENGINE | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH | MAX_DATA_LENGTH | INDEX_LENGTH | DATA_FREE | AUTO_INCREMENT | CREATE_TIME | UPDATE_TIME | CHECK_TIME | TABLE_COLLATION | CHECKSUM | CREATE_OPTIONS |
+--------------------+--------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+----------------+
| base_vehicle | InnoDB | 10 | Dynamic | 72210 | 36 | 2637824 | 0 | 12681216 | 4194304 | 150814 | 2020-05-14 04:16:34 | NULL | NULL | utf8_general_ci | NULL | |
| product_to_store | InnoDB | 10 | Dynamic | 2636946 | 32 | 86622208 | 0 | 124452864 | 5242880 | NULL | 2020-05-14 04:24:26 | 2020-05-14 04:31:18 | NULL | utf8_general_ci | NULL | |
| product_to_vehicle | InnoDB | 10 | Dynamic | 22502991 | 50 | 1147092992 | 0 | 1274970112 | 7340032 | 23457421 | 2020-05-14 05:15:41 | 2020-05-14 05:24:36 | NULL | utf8_general_ci | NULL | |
| vehicle_type | InnoDB | 10 | Dynamic | 11 | 1489 | 16384 | 0 | 16384 | 0 | 2190 | 2020-05-14 04:29:15 | NULL | NULL | utf8_general_ci | NULL | |
+--------------------+--------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+----------------+
Table status on 5.7.30
+--------------------+--------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+
| TABLE_NAME | Engine | Version | Row_format | table_rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | table_collation | Checksum | Create_options |
+--------------------+--------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+
| base_vehicle | InnoDB | 10 | Dynamic | 70716 | 52 | 3686400 | 0 | 11124736 | 4194304 | 150814 | 2020-05-14 01:04:16 | NULL | NULL | utf8_general_ci | NULL | |
| product_to_store | InnoDB | 10 | Dynamic | 2517116 | 39 | 99270656 | 0 | 144637952 | 7340032 | NULL | 2020-05-08 22:36:31 | NULL | NULL | utf8_general_ci | NULL | |
| product_to_vehicle | InnoDB | 10 | Dynamic | 15627279 | 37 | 584024064 | 0 | 1739882496 | 685768704 | 23457421 | 2020-05-14 01:03:35 | NULL | NULL | utf8_general_ci | NULL | |
| vehicle_type | InnoDB | 10 | Dynamic | 11 | 1489 | 16384 | 0 | 16384 | 0 | 2190 | 2020-05-08 22:36:31 | NULL | NULL | utf8_general_ci | NULL | |
+--------------------+--------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+
8.0.20
EXPLAIN ANALYZE SELECT DISTINCT vehicle_id, submodel_id, store_id FROM product_to_store pts JOIN product_to_vehicle ptv USING (product_id) WHERE vehicle_id != 0 AND pts.store_id = 21;
| -> Table scan on <temporary> (actual time=0.001..3.453 rows=60193 loops=1)
    -> Temporary table with deduplication (actual time=27786.823..27795.343 rows=60193 loops=1)
        -> Nested loop inner join (cost=3222988.86 rows=14633875) (actual time=0.064..6910.370 rows=8610547 loops=1)
            -> Index lookup on pts using store_id (store_id=21) (cost=81628.75 rows=813308) (actual time=0.041..176.566 rows=420673 loops=1)
            -> Filter: (ptv.vehicle_id <> 0) (cost=0.26 rows=18) (actual time=0.006..0.014 rows=20 loops=420673)
                -> Index lookup on ptv using product_vehicle_submodel (product_id=pts.product_id) (cost=0.26 rows=36) (actual time=0.006..0.011 rows=20 loops=420673)
5.7.30
EXPLAIN format = JSON SELECT DISTINCT vehicle_id, submodel_id, store_id FROM product_to_store pts JOIN product_to_vehicle ptv USING (product_id) WHERE vehicle_id != 0 AND pts.store_id = 21;
{
"query_block": {
"select_id": 1,
"cost_info": {
"query_cost": "2711880.30"
},
"duplicates_removal": {
"using_temporary_table": true,
"using_filesort": false,
"nested_loop": [
{
"table": {
"table_name": "pts",
"access_type": "ref",
"possible_keys": [
"PRIMARY",
"product_id",
"store_id",
"store_product"
],
"key": "store_product",
"used_key_parts": [
"store_id"
],
"key_length": "4",
"ref": [
"const"
],
"rows_examined_per_scan": 547242,
"rows_produced_per_join": 547242,
"filtered": "100.00",
"using_index": true,
"cost_info": {
"read_cost": "1067.75",
"eval_cost": "109448.40",
"prefix_cost": "110516.15",
"data_read_per_join": "8M"
},
"used_columns": [
"product_id",
"store_id"
]
}
},
{
"table": {
"table_name": "ptv",
"access_type": "ref",
"possible_keys": [
"product_vehicle_submodel",
"vehicle_product"
],
"key": "product_vehicle_submodel",
"used_key_parts": [
"product_id"
],
"key_length": "4",
"ref": [
"pts.product_id"
],
"rows_examined_per_scan": 18,
"rows_produced_per_join": 5097113,
"filtered": "50.00",
"using_index": true,
"cost_info": {
"read_cost": "562530.32",
"eval_cost": "1019422.75",
"prefix_cost": "2711880.30",
"data_read_per_join": "77M"
},
"used_columns": [
"product_to_vehicle_id",
"product_id",
"vehicle_id",
"submodel_id"
],
"attached_condition": "(`ptv`.`vehicle_id` <> 0)"
}
}
]
}
}
}
Here is the comparison for the second query:
8.0.20
{
"query_block": {
"select_id": 1,
"cost_info": {
"query_cost": "7186.24"
},
"duplicates_removal": {
"using_temporary_table": true,
"using_filesort": false,
"nested_loop": [
{
"table": {
"table_name": "vt",
"access_type": "index",
"possible_keys": [
"PRIMARY"
],
"key": "vehicle_type_name",
"used_key_parts": [
"vehicle_type_name"
],
"key_length": "194",
"rows_examined_per_scan": 11,
"rows_produced_per_join": 11,
"filtered": "100.00",
"using_index": true,
"cost_info": {
"read_cost": "0.25",
"eval_cost": "1.10",
"prefix_cost": "1.35",
"data_read_per_join": "2K"
},
"used_columns": [
"vehicle_type_id",
"vehicle_type_name"
]
}
},
{
"table": {
"table_name": "bv",
"access_type": "ref",
"possible_keys": [
"vehicle_type_id"
],
"key": "vehicle_type_id",
"used_key_parts": [
"vehicle_type_id"
],
"key_length": "2",
"ref": [
"vt.vehicle_type_id"
],
"rows_examined_per_scan": 6519,
"rows_produced_per_join": 71712,
"filtered": "100.00",
"using_index": true,
"cost_info": {
"read_cost": "13.69",
"eval_cost": "7171.20",
"prefix_cost": "7186.24",
"data_read_per_join": "1M"
},
"used_columns": [
"vehicle_id",
"vehicle_type_id"
]
}
}
]
}
}
}
5.7.30
{
"query_block": {
"select_id": 1,
"cost_info": {
"query_cost": "14684.01"
},
"duplicates_removal": {
"using_temporary_table": true,
"using_filesort": false,
"nested_loop": [
{
"table": {
"table_name": "vt",
"access_type": "index",
"possible_keys": [
"PRIMARY"
],
"key": "vehicle_type_name",
"used_key_parts": [
"vehicle_type_name"
],
"key_length": "194",
"rows_examined_per_scan": 11,
"rows_produced_per_join": 11,
"filtered": "100.00",
"using_index": true,
"cost_info": {
"read_cost": "1.00",
"eval_cost": "2.20",
"prefix_cost": "3.20",
"data_read_per_join": "2K"
},
"used_columns": [
"vehicle_type_id",
"vehicle_type_name"
]
}
},
{
"table": {
"table_name": "bv",
"access_type": "ref",
"possible_keys": [
"vehicle_type_id"
],
"key": "vehicle_type_id",
"used_key_parts": [
"vehicle_type_id"
],
"key_length": "2",
"ref": [
"vt.vehicle_type_id"
],
"rows_examined_per_scan": 6647,
"rows_produced_per_join": 73126,
"filtered": "100.00",
"using_index": true,
"cost_info": {
"read_cost": "55.61",
"eval_cost": "14625.20",
"prefix_cost": "14684.01",
"data_read_per_join": "1M"
},
"used_columns": [
"vehicle_id",
"vehicle_type_id"
]
}
}
]
}
}
}
Oddly, these numbers seem to suggest that MySQL 8's overall cost is lower, yet it still executes much more slowly.
I recently developed a script that spawns multiple processes to import tables in parallel using mysqlimport and a --tab style mysqldump export. On the development server it works great, cutting the time from roughly 15 minutes down to 4 or 5 compared with a standard mysql db_name < backup.sql style import.
The problem is on our production server, where this script seems to lock tables system-wide. That is, I'm importing the backup into a completely different database, yet the tables of our live application still end up locked. SHOW PROCESSLIST confirms that tables in our live database really are locked, even though no INSERT or UPDATE queries are running against any table in that database.
Why is this happening? Is there a configuration variable/setting I can adjust to prevent this locking?
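A diagnostic sketch that might help narrow down what kind of lock is involved (it assumes performance_schema is available; on 5.7 the wait/lock/metadata/sql/mdl instrument may need to be enabled first):

-- Diagnostic sketch: list metadata locks that sessions are currently waiting on.
SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE lock_status = 'PENDING';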
Suppose I have a query like this:
SELECT *
FROM table_a
JOIN table_b USING (id)
WHERE table_b.column = 1
I have an index on id and an index on column, but I often add a composite index covering both to make queries like this more efficient. My question is about the order of the columns in that index. Through trial and error I've found that sometimes the DBMS prefers the join column first, and sometimes it prefers the WHERE column first.
For a query like the one above, is there a hard and fast rule I can follow to know which column order will be most efficient?
Usually I just add both indexes, run EXPLAIN on the query to see which one is preferred, and then drop the other. But it feels like this process could be improved with a better understanding of the logic that determines the index order.
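For reference, the trial-and-error comparison looks something like the sketch below (the index names are hypothetical, and table_a/table_b are the placeholder names from the example above); the hope is for a rule that makes the second step unnecessary:

-- The two candidate composite indexes, named hypothetically.
ALTER TABLE table_b ADD INDEX idx_column_id (`column`, id);
ALTER TABLE table_b ADD INDEX idx_id_column (id, `column`);
-- Check which one the optimizer actually picks.
EXPLAIN
SELECT *
FROM table_a
JOIN table_b USING (id)
WHERE table_b.column = 1;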