I'm trying to figure out whether there is any practical benefit to configuring a RAID array on the instance stores of 3x d2.2xlarge instances used for HDFS. Originally I planned to just mount each store and add it as an additional data directory for Hadoop. But it looks like a RAID 0 or 10 configuration might squeeze out some extra performance. Since durability is handled by HDFS itself, there is no need to consider RAID 1 or 5 from that angle (e.g. if one or all of the stores on an instance fail, durability comes from replication on the other datanodes). RAID 6 seems impractical because of its well-known long rebuild times and the throughput hit from writing parity twice (again, it seems best to let HDFS handle durability). That leaves RAID 0 and 10, both of which should in theory give better disk I/O than a plain HDD.
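For concreteness, the "just mount each store" (JBOD) option I had in mind is simply listing every mount point in dfs.datanode.data.dir; the paths below are placeholders for wherever the instance stores end up mounted, and as far as I know the datanode round-robins new blocks across the listed directories:
<!-- datanode hdfs-site.xml; mount points below are placeholders -->
<property><name>dfs.datanode.data.dir</name><value>file:///mnt/store1/hdfs/data,file:///mnt/store2/hdfs/data,file:///mnt/store3/hdfs/data</value></property>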
I'm trying to use Docker Compose to test some services that need HDFS. Since the services under test, the namenode and the datanodes will all be running on the same physical machine (a development laptop), it would be nice to keep memory usage down by running only one datanode. I'm using these Docker images.
If I run one namenode and 3 datanodes, everything works as expected. I tried to run just a single datanode by setting this in hdfs-site.xml on both nodes, and by bringing up only 1 datanode via compose:
<property><name>dfs.replication</name><value>1</value></property>
It definitely picks the setting up, because when it starts I see this in the logs:
blockmanagement.BlockManager: defaultReplication = 1
blockmanagement.BlockManager: maxReplication = 512
blockmanagement.BlockManager: minReplication = 1
blockmanagement.BlockManager: maxReplicationStreams = 2
blockmanagement.BlockManager: replicationRecheckInterval = 3000
The first write goes through just fine. On the second write I get this (on the client application; there are no logs about it on the Hadoop side):
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]], original=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:929)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
Every write after that fails with this error on both the client and HDFS:
Failed to APPEND_FILE (whatever) for (client X) on 172.18.0.6 because this file lease is currently owned by (client Y) on 172.18.0.6
If I run 3 datanodes the problem magically goes away. Does anyone have experience running one namenode and one datanode in Docker? My poor little laptop can't handle the power of 3 datanodes.
EDIT: I tried the solution from here. No dice. Now I get:
17:59:56 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): This feature is disabled. Please refer to dfs.client.block.write.replace-datanode-on-failure.enable configuration property.
at org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure.checkEnabled(ReplaceDatanodeOnFailure.java:116)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3317)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:758)
[...]
Full logs from the HDFS side (name is the namenode, data is the datanode; because of docker-compose, the interleaving of the log lines is not strictly chronological):
name | 16/10/03 18:03:43 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.18.0.11:50010, datanodeUuid=8ad27f17-7a87-45cb-b782-981c2e7b6dc2, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-22dd8c41-af12-41ad-81ef-832ebb10ec39;nsid=1117453574;c=0) storage 8ad27f17-7a87-45cb-b782-981c2e7b6dc2
name | 16/10/03 18:03:43 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
data | 16/10/03 18:03:43 INFO datanode.VolumeScanner: VolumeScanner(/hadoop/dfs/data, DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6): finished scanning block pool BP-1023406345-172.18.0.9-1475517812059
data | 16/10/03 18:03:43 INFO datanode.DataNode: Block pool Block pool BP-1023406345-172.18.0.9-1475517812059 (Datanode Uuid null) service to hadoop-nn1/172.18.0.9:8020 successfully registered with NN
data | 16/10/03 18:03:43 INFO datanode.DataNode: For namenode hadoop-nn1/172.18.0.9:8020 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
name | 16/10/03 18:03:43 INFO net.NetworkTopology: Adding a new node: /default-rack/172.18.0.11:50010
name | 16/10/03 18:03:44 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
data | 16/10/03 18:03:44 INFO datanode.VolumeScanner: VolumeScanner(/hadoop/dfs/data, DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6): no suitable block pools found to scan. Waiting 1814399359 ms.
data | 16/10/03 18:03:44 INFO datanode.DataNode: Namenode Block pool BP-1023406345-172.18.0.9-1475517812059 (Datanode Uuid 8ad27f17-7a87-45cb-b782-981c2e7b6dc2) service to hadoop-nn1/172.18.0.9:8020 trying to claim ACTIVE state with txid=1
name | 16/10/03 18:03:44 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6 for DN 172.18.0.11:50010
data | 16/10/03 18:03:44 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1023406345-172.18.0.9-1475517812059 (Datanode Uuid 8ad27f17-7a87-45cb-b782-981c2e7b6dc2) service to hadoop-nn1/172.18.0.9:8020
data | 16/10/03 18:03:44 INFO datanode.DataNode: Successfully sent block report 0x8b0c17676f1, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 16 msec to generate and 190 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
name | 16/10/03 18:03:44 INFO BlockStateChange: BLOCK* processReport: from storage DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6 node DatanodeRegistration(172.18.0.11:50010, datanodeUuid=8ad27f17-7a87-45cb-b782-981c2e7b6dc2, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-22dd8c41-af12-41ad-81ef-832ebb10ec39;nsid=1117453574;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 msecs
name | 16/10/03 18:04:33 INFO hdfs.StateChange: DIR* completeFile: /XXX/appender/1475517840000/.write/172.18.0.6 is closed by DFSClient_NONMAPREDUCE_1250587730_30
data | 16/10/03 18:03:44 INFO datanode.DataNode: Got finalize command for block pool BP-1023406345-172.18.0.9-1475517812059
data | 16/10/03 18:04:34 INFO datanode.DataNode: Receiving BP-1023406345-172.18.0.9-1475517812059:blk_1073741825_1001 src: /172.18.0.6:39732 dest: /172.18.0.11:50010
data | 16/10/03 18:04:34 INFO DataNode.clienttrace: src: /172.18.0.6:39732, dest: /172.18.0.11:50010, bytes: 7421, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1250587730_30, offset: 0, srvID: 8ad27f17-7a87-45cb-b782-981c2e7b6dc2, blockid: BP-1023406345-172.18.0.9-1475517812059:blk_1073741825_1001, duration: 107663969
name | 16/10/03 18:04:33 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6:NORMAL:172.18.0.11:50010|RBW]]} for /XXX/appender/1475517840000/172.18.0.6
name | 16/10/03 18:04:34 INFO namenode.FSNamesystem: BLOCK* blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6:NORMAL:172.18.0.11:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /XXX/appender/1475517840000/172.18.0.6
name | 16/10/03 18:04:34 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.18.0.11:50010 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6:NORMAL:172.18.0.11:50010|RBW]]} size 7421
name | 16/10/03 18:04:34 INFO hdfs.StateChange: DIR* completeFile: /XXX/appender/1475517840000/172.18.0.6 is closed by DFSClient_NONMAPREDUCE_1250587730_30
name | 16/10/03 18:04:45 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 21 Number of transactions batched in Syncs: 1 Number of syncs: 8 SyncTimes(ms): 17
name | 16/10/03 18:04:45 INFO hdfs.StateChange: DIR* completeFile: /XXX/appender/1475517840000/.write/172.18.0.6 is closed by DFSClient_NONMAPREDUCE_-1821674544_30
name | 16/10/03 18:04:48 WARN hdfs.StateChange: DIR* NameSystem.append: Failed to APPEND_FILE /XXX/appender/1475517840000/172.18.0.6 for DFSClient_NONMAPREDUCE_1129971636_30 on 172.18.0.6 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-1821674544_30 on 172.18.0.6
hdfs-site.xml (namenode):
<configuration>
<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
</configuration>
hdfs-site.xml (datanode):
<configuration>
<property><name>dfs.datanode.data.dir</name><value>file:///hadoop/dfs/data</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
</configuration>
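For reference, both stack traces point at the dfs.client.block.write.replace-datanode-on-failure.* properties, which as far as I can tell are read by the writing client rather than by the namenode or datanode, so I suspect they belong in whatever configuration my client application loads. A minimal sketch of the client-side hdfs-site.xml I think they refer to (assuming the client picks hdfs-site.xml up from its classpath; I have not verified that this is the complete fix):
<configuration>
<!-- client-side sketch: with policy NEVER the client keeps writing to the existing pipeline instead of asking the namenode for a replacement datanode -->
<property><name>dfs.client.block.write.replace-datanode-on-failure.enable</name><value>true</value></property>
<property><name>dfs.client.block.write.replace-datanode-on-failure.policy</name><value>NEVER</value></property>
<property><name>dfs.replication</name><value>1</value></property>
</configuration>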
I tried to set up a Hadoop high-availability cluster following this tutorial: http://www.edureka.co/blog/how-to-set-up-hadoop-cluster-with-hdfs-high-availability/
When I follow that article I run into two main problems:
1. hdfs namenode -bootstrapStandby (I can't use this command, because the Namenode on the standby node is not started.) To work around this, I ran the namenode manually on the standby node before using this command.
2. When I run the second ZKFC (on the standby node), it kills the Namenode process, and I can't even start it again manually. As a result the Namenode only runs on the active node. If we kill the active node, the standby just sits there doing nothing (it does not start a Namenode).
Does anyone know what's wrong with that article?
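For context, this is roughly the set of HA / automatic-failover properties I understand the article to be describing (the nameservice, host names and key path below are placeholders for my own values; this is a sketch of my understanding, not a verified working configuration):
<!-- hdfs-site.xml (placeholders) -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>nn1host:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>nn2host:8020</value></property>
<property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://jn1host:8485;jn2host:8485;jn3host:8485/mycluster</value></property>
<property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
<property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
<property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
<!-- core-site.xml (placeholders) -->
<property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
<property><name>ha.zookeeper.quorum</name><value>zk1host:2181,zk2host:2181,zk3host:2181</value></property>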
According to the HDFS Architecture page, HDFS is designed for "streaming data access". I'm not sure exactly what that means, but I'd guess it implies that operations like seek are either disabled or perform poorly. Is that correct?
I'm interested in using HDFS to store audio/video files that need to be streamed to browser clients. Most streams will be played from start to finish, but some will involve a lot of seeking.
Or is there maybe another file system that would do this better?