I have a very large SQL dump file (30 GB) that I need to edit (do some find/replace) before loading it back into a database. Besides the size, the file also contains very long lines: apart from the first 40 lines and the last 12, every line is about 1 MB long. The lines are all INSERT INTO commands and look similar:
cat bigdumpfile.sql | cut -c-100
INSERT INTO `table1` VALUES (951068,1407592,0.0267,0.0509,0.121),(285
INSERT INTO `table1` VALUES (238317,1407664,0.008,0.0063,0.1286),(241
INSERT INTO `table1` VALUES (938922,1407739,0.0053,0.0024,0.031),(226
INSERT INTO `table1` VALUES (44678,1407886,0.0028,0.0028,0.0333),(234
INSERT INTO `table1` VALUES (910412,1407961,0.001,0.0014,0),(911017,1
INSERT INTO `table1` VALUES (903890,1408050,0.0066,0.01,0.0287),(9095
INSERT INTO `table1` VALUES (257090,1408136,0.0023,0.0037,0.0196),(56
INSERT INTO `table1` VALUES (593367,1408237,0.0066,0.0117,0.0286),(95
INSERT INTO `table1` VALUES (870488,1408339,0.0131,0.009,0.0135),(870
INSERT INTO `table1` VALUES (282798,1408414,0.0015,0.014,0.014),(2830
...
parallel fails on the long lines with an error:
parallel -a bigdumpfile.sql -k sed -i.bak 's/table1/newtable/'
parallel: Error: Command line too long (1018952 >= 63543) at input 0: INSERT INTO `table1...
Because all the lines are similar and I only need to do the find/replace at the start of each line, I followed the advice in this similar question and came up with what I thought was a nice use of --recstart and --recend. However, these don't work either:
parallel -a bigdumpfile.sql -k --recstart 'INSERT' --recend 'VALUES' sed -i.bak 's/table/newtable/'
parallel: Error: Command line too long (1018952 >= 63543) at input 0: INSERT INTO `table1...
I tried several variations with --block, but couldn't get it to work. I'm a GNU parallel newbie, so I'm probably doing something wrong or just missing something obvious. Any help is appreciated. Thanks!
This is with GNU parallel 20240122.
You should use --pipe (or --pipepart). With these options, parallel feeds the records to the command's standard input instead of placing them on the command line, so the ~1 MB lines no longer overflow the command-line length limit. If your disks are fast, --pipepart is the more efficient choice; if they are slow, use --pipe.
Adjust -j to find the setting that works best for your disks. If you really want to run multiple inserts in parallel:
But as suggested by Stéphane Chazelas, it will probably be faster to do this instead:
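The suggested command itself is missing from the post; since only the table name at the start of each line needs replacing, a plain single-process sed (a sketch, with newdumpfile.sql as an assumed output name) is likely disk-bound anyway, making parallelism unnecessary:

```shell
# Anchor the pattern at the start of the line so only the table name
# in the INSERT INTO prefix is rewritten, never values in the row data.
sed 's/^INSERT INTO `table1`/INSERT INTO `newtable`/' bigdumpfile.sql > newdumpfile.sql
```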