Questions tagged [hadoop] (server)

John R
Asked: 2020-06-26 14:49:26 +0800 CST

Optimal RAID configuration for EC2 instance stores used for HDFS

  • 0

I'm trying to determine whether there is any practical advantage to configuring a RAID array on the instance stores of 3x d2.2xlarge instances used for HDFS. Originally I planned to simply mount each store and add it as an additional data directory for Hadoop, but it seems a RAID 0 or 10 configuration might bring some extra performance. Since durability is handled by HDFS itself, there is no need to consider RAID 1 or 5 from that angle (e.g., if one or all stores on an instance fail, durability is provided by replication on the other datanodes). RAID 6 seems impractical because of its notoriously long rebuild times and the throughput penalty of the 2x parity writes (again, it seems best to let HDFS handle durability). That leaves RAID 0 and 10, both of which should in theory offer better disk I/O than a standard HDD.
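For reference, the JBOD alternative mentioned above (no RAID, one data directory per store) is configured by listing each mount in dfs.datanode.data.dir; a minimal sketch, assuming the stores are mounted at /mnt/data0 through /mnt/data2 (hypothetical paths):

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/mnt/data0/hdfs/data,/mnt/data1/hdfs/data,/mnt/data2/hdfs/data</value>
</property>

The datanode then spreads block writes across the directories, which is the usual reason JBOD is recommended over RAID 0 for datanodes: one failed disk costs one directory rather than the whole array.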

amazon-ec2 raid hadoop hdfs amazon-ephemeral
  • 1 answer
  • 242 Views
innervoice
Asked: 2020-01-21 21:59:26 +0800 CST

List all the files under an HDFS directory

  • 1

Due to a bug in one component, files accumulated in HDFS and their number is huge: 2,123,516. I want to list all the files and copy their names into a single file, but when I run the following command it throws a Java heap space error.

hdfs dfs -ls /tmp/content/

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421)
    at java.lang.StringBuffer.append(StringBuffer.java:272)
    at java.net.URI.appendSchemeSpecificPart(URI.java:1911)
    at java.net.URI.toString(URI.java:1941)
    at java.net.URI.<init>(URI.java:742)
    at org.apache.hadoop.fs.Path.initialize(Path.java:145)
    at org.apache.hadoop.fs.Path.<init>(Path.java:126)
    at org.apache.hadoop.fs.Path.<init>(Path.java:50)
    at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:215)
    at org.apache.hadoop.hdfs.DistributedFileSystem.makeQualified(DistributedFileSystem.java:252)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:311)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:842)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:902)
    at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1032)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:987)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:965)
    at org.apache.hadoop.fs.shell.Command.runAll(Command.java:62)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:1822)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)

Is there any other way to list the files, and how much heap space would it take to list 2,400,000 files?
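One workaround worth trying (a sketch, not verified against this cluster) is to raise the client-side heap, since the listing is assembled in the shell's own JVM; HADOOP_CLIENT_OPTS is read by the hadoop/hdfs launcher scripts:

HADOOP_CLIENT_OPTS="-Xmx4g" hdfs dfs -ls /tmp/content/ > /tmp/content_files.txt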

hadoop
  • 1 answer
  • 818 Views
Sedat Kestepe
Asked: 2017-04-08 00:38:28 +0800 CST

Namenodes fail to start on HA cluster - fatal error in Journalnode logs

  • 0

I have a problem with my Hadoop cluster:

CentOS 7.3, Hortonworks Ambari 2.4.2, Hortonworks HDP 2.5.3

Ambari stderr:

2017-04-06 10:49:49,039 - Getting jmx metrics from NN failed. URL: http://master02.mydomain.local:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/jmx.py", line 38, in get_value_from_jmx
    _, data, _ = get_user_call_output(cmd, user=run_user, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
ExecutionFailed: Execution of 'curl -s 'http://master02.mydomain.local:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' 1>/tmp/tmp0CNZmD 2>/tmp/tmpRAZgwz' returned 7. 

2017-04-06 10:49:51,041 - Getting jmx metrics from NN failed. URL: http://master03.mydomain.local:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/jmx.py", line 38, in get_value_from_jmx
    _, data, _ = get_user_call_output(cmd, user=run_user, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
ExecutionFailed: Execution of 'curl -s 'http://master03.mydomain.local:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' 1>/tmp/tmp_hLNY7 2>/tmp/tmpoCOTt8' returned 7. 
...
(tries several times and then)
...
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 420, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 101, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 184, in namenode
    if is_this_namenode_active() is False:
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/decorator.py", line 55, in wrapper
    return function(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 562, in is_this_namenode_active
    raise Fail(format("The NameNode {namenode_id} is not listed as Active or Standby, waiting..."))
resource_management.core.exceptions.Fail: The NameNode nn1 is not listed as Active or Standby, waiting...

Ambari stdout:

2017-04-06 10:53:20,521 - call returned (255, '17/04/06 10:53:20 INFO ipc.Client: Retrying connect to server: master03.mydomain.local/10.0.109.21:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)\n17/04/06 10:53:20 WARN ipc.Client: Failed to connect to server: master03.mydomain.local/10.0.109.21:8020: retries get failed due to exceeded maximum allowed retries number: 1
2017-04-06 10:53:20,522 - No active NameNode was found after 5 retries. Will return current NameNode HA states

Namenode log:

2017-04-06 10:11:43,561 FATAL Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [10.0.109.20:8485, 10.0.109.21:8485, 10.0.109.22:8485], stream=null)) java.lang.AssertionError: Decided to synchronize log to startTxId: 1 endTxId: 1 isInProgress: true but logger 10.0.109.20:8485 had seen txid 1865764 committed at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnclosedSegment(QuorumJournalManager.java:336) at (some class at some other class at ...)

More logs from the Namenode:

2017-04-06 10:11:42,380 INFO  ipc.Server (Server.java:logException(2401)) - IPC Server handler 72 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.sendHeartbeat from 9.1.10.14:37173 Call#2322 Retry#0
org.apache.hadoop.ipc.RetriableException: NameNode still not started
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2057)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendHeartbeat(NameNodeRpcServer.java:1414)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.sendHeartbeat(DatanodeProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29064)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
2017-04-06 10:11:42,390 INFO  namenode.NameNode (NameNode.java:startCommonServices(876)) - NameNode RPC up at: bigm02.etstur.local/9.1.10.21:8020
2017-04-06 10:11:42,391 INFO  namenode.FSNamesystem (FSNamesystem.java:startStandbyServices(1286)) - Starting services required for standby state
2017-04-06 10:11:42,393 INFO  ha.EditLogTailer (EditLogTailer.java:<init>(117)) - Will roll logs on active node at bigm03.etstur.local/9.1.10.22:8020 every 120 seconds.
2017-04-06 10:11:42,397 INFO  ha.StandbyCheckpointer (StandbyCheckpointer.java:start(129)) - Starting standby checkpoint thread...
Checkpointing active NN at http://bigm03.etstur.local:50070
Serving checkpoints at http://bigm02.etstur.local:50070
2017-04-06 10:11:43,371 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1329)) - Stopping services started for standby state
2017-04-06 10:11:43,372 WARN  ha.EditLogTailer (EditLogTailer.java:doWork(349)) - Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:347)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2017-04-06 10:11:43,475 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1130)) - Starting services required for active state
2017-04-06 10:11:43,485 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnfinalizedSegments(435)) - Starting recovery process for unclosed journal segments...
2017-04-06 10:11:43,534 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnfinalizedSegments(437)) - Successfully started new epoch 17
2017-04-06 10:11:43,535 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnclosedSegment(263)) - Beginning recovery of unclosed segment starting at txid 1
2017-04-06 10:11:43,557 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnclosedSegment(272)) - Recovery prepare phase complete. Responses:
9.1.10.20:8485: segmentState { startTxId: 1 endTxId: 1 isInProgress: true } lastWriterEpoch: 14 lastCommittedTxId: 1865764
9.1.10.21:8485: segmentState { startTxId: 1 endTxId: 1 isInProgress: true } lastWriterEpoch: 14 lastCommittedTxId: 1865764
2017-04-06 10:11:43,560 INFO  client.QuorumJournalManager (QuorumJournalManager.java:recoverUnclosedSegment(296)) - Using longest log: 9.1.10.20:8485=segmentState {
  startTxId: 1
  endTxId: 1
  isInProgress: true
}
lastWriterEpoch: 14
lastCommittedTxId: 1865764


2017-04-06 10:11:43,561 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [9.1.10.20:8485, 9.1.10.21:8485, 9.1.10.22:8485], stream=null))
java.lang.AssertionError: Decided to synchronize log to startTxId: 1
endTxId: 1
isInProgress: true
 but logger 9.1.10.20:8485 had seen txid 1865764 committed
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnclosedSegment(QuorumJournalManager.java:336)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:455)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1459)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1139)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1915)
        at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
        at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1783)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1631)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
        at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
2017-04-06 10:11:43,562 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-04-06 10:11:43,563 INFO  namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at bigm02.etstur.local/9.1.10.21
************************************************************/

Although the journalnodes start successfully, they have the following error, which also looks suspicious:

2017-04-05 17:15:05,653 ERROR RECEIVED SIGNAL 15: SIGTERM

The background of this error is as follows...

Yesterday I noticed that one of the datanodes had failed and stopped. The log contained the following errors:

2017-04-05 15:50:11,168 ERROR datanode.DataNode (BPServiceActor.java:run(752)) - Initialization failed for Block pool <registering> (Datanode Uuid be2286f5-00d7-4758-b89a-45e2304cabe3) service to master02.mydomain.local/10.0.109.23:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:596)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
        at java.lang.Thread.run(Thread.java:745)
2017-04-05 15:50:11,168 ERROR datanode.DataNode (BPServiceActor.java:run(752)) - Initialization failed for Block pool <registering> (Datanode Uuid be2286f5-00d7-4758-b89a-45e2304cabe3) service to master02.mydomain.local/10.0.109.23:8020. Exiting.
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 13, volumes configured: 14, volumes failed: 1, volume failures tolerated: 0

2017-04-05 17:15:36,968 INFO  common.Storage (Storage.java:tryLock(774)) - Lock on /grid/13/hadoop/hdfs/data/in_use.lock
 acquired by nodename [email protected]

Despite the volume error, I was able to browse /grid/13/.

So I wanted to try the answer from this Stack Overflow question:

Datanode not starting correctly

First, I deleted the data folder under /grid/13/hadoop/hdfs (/grid/13/hadoop/hdfs/data) and tried to start the datanode.

It failed again with the same error, so I resorted to formatting the namenode. My cluster is new and empty, so I am fine with any solution, including a format:

(On the first attempt I passed the block pool ID instead of the clusterId, and the command failed.)

./hdfs namenode -format -clusterId <myClusterId>
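For reference, the clusterId this command expects can be read from the VERSION file of an existing namenode or journalnode; a sketch assuming the default HDP storage layout (the path below is an assumption):

grep clusterID /hadoop/hdfs/namenode/current/VERSION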

After this format, one of the namenodes failed. When I tried to restart all HDFS components, both namenodes failed.

Any comments are appreciated.

hadoop
  • 2 answers
  • 5189 Views
D. Müller
Asked: 2016-12-02 06:31:09 +0800 CST

Setting up a Windows 10 client for a Linux KDC realm

  • 7

I set up a KDC server and created a realm EXAMPLE.COM. This is my krb5.conf file:

[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  EXAMPLE.COM = {
    admin_server = my.linux-server.de
    kdc = my.linux-server.de
  }

I also added a user testuser with password abc via kadmin.local:

kadmin.local:  addprinc testuser@EXAMPLE.COM

I can log in successfully on my Ubuntu VM:

[root@ubuntu-vm ~]# kinit testuser
Password for testuser@EXAMPLE.COM:

Then klist shows:

[root@ubuntu-vm ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: testuser@EXAMPLE.COM

Valid starting       Expires              Service principal
01.12.2016 14:58:40  02.12.2016 14:58:40  krbtgt/EXAMPLE.COM@EXAMPLE.COM

And I can open my Kerberized Hadoop UIs.

===========================================================================

The problem is my Windows client. I set it up by copying the krb5.conf file from the KDC machine to the Windows client and renaming it to kdc5.ini.

I also set the computer's domain:

C:> Ksetup /setdomain EXAMPLE.COM
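As an aside, a non-domain-joined client typically also needs the realm's KDC registered; a hedged sketch of the additional Ksetup call often used alongside /setdomain (whether it is required here is an assumption):

C:> Ksetup /addkdc EXAMPLE.COM my.linux-server.de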

After a restart, I tried to connect to my KDC realm via:

C:> kinit testuser@EXAMPLE.COM
Password for testuser@EXAMPLE.COM:
<empty row>

So far everything looks fine, but when I call klist I only get the following:

Aktuelle Anmelde-ID ist 0:0x7eca34

Zwischengespeicherte Tickets: (0)

Which in English is roughly ... cached tickets: (0)

I also cannot open my websites from the Windows client, so I suppose this is an interoperability problem, since connecting from my Ubuntu client works without any issue.

My browser (Firefox) should be configured correctly on both machines (Ubuntu and Windows); I set the network.negotiate-auth.trusted-uris property to http://my.linux-server.de (after I did this, the Ubuntu client could open the site). Curl also works on Ubuntu, but not on Windows.

Update: also tried a second Windows client, without any success...

windows linux hadoop kerberos
  • 1 answer
  • 11215 Views
mart
Asked: 2016-10-28 07:29:35 +0800 CST

Hadoop datanodes - start with one disk and add more later, or start with as many disks as possible and fill them evenly?

  • 0

Regarding the datanode disk setup in a Hadoop cluster, I am wondering which of these two options is better:

  1. Add one (or a few) disks to a datanode, and attach more once they start to fill up.

  2. Or start with as many disks as possible from the beginning and fill them in parallel.

Two other related questions: is it best to get the largest drives available, to maximize capacity for a limited number of drive slots?

And how much storage can a single datanode support? (Of course this depends on the datanode hardware spec, but still... any approximate limit?)
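As a side note, the configured capacity and usage each datanode currently reports can be checked with a one-liner, which helps when experimenting with either option:

hdfs dfsadmin -report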

hadoop
  • 1 answer
  • 444 Views
Robert Fraser
Asked: 2016-10-05 09:34:15 +0800 CST

Running HDFS with only 1 datanode - appends fail

  • 2

I'm trying to test some services that require HDFS using Docker Compose. Since the services under test, the namenode, and the datanodes will all run on the same physical machine (a development laptop), it would be nice to reduce memory usage by running only one datanode. I'm using these Docker images.

If I run one namenode and 3 datanodes, everything works as expected. I tried to run only one datanode by setting this in hdfs-site.xml on both nodes, and by running only 1 datanode via compose:

<property><name>dfs.replication</name><value>1</value></property>

The setting is definitely being picked up, because on startup I see this in the logs:

blockmanagement.BlockManager: defaultReplication         = 1
blockmanagement.BlockManager: maxReplication             = 512
blockmanagement.BlockManager: minReplication             = 1
blockmanagement.BlockManager: maxReplicationStreams      = 2
blockmanagement.BlockManager: replicationRecheckInterval = 3000

The first write succeeds just fine. On the second write I get this (on the client application; there are no logs about it on the Hadoop side):

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]], original=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:929)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)

Every write after that fails with this error on both the client and HDFS:

Failed to APPEND_FILE (whatever) for (client X) on 172.18.0.6 because this file lease is currently owned by (client Y) on 172.18.0.6

If I run 3 datanodes, the problem magically disappears. Does anyone have experience running one namenode and one datanode in Docker? My poor little laptop can't handle the power level of 3 datanodes.

Edit: I tried this solution here. No dice. Now I get:

17:59:56 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): This feature is disabled.  Please refer to dfs.client.block.write.replace-datanode-on-failure.enable configuration property.
    at org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure.checkEnabled(ReplaceDatanodeOnFailure.java:116)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3317)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:758)
    [...]
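For context, the property named in this error is usually paired with the policy property for single-datanode setups; a sketch of the client-side settings (not verified against these Docker images):

<property><name>dfs.client.block.write.replace-datanode-on-failure.enable</name><value>true</value></property>
<property><name>dfs.client.block.write.replace-datanode-on-failure.policy</name><value>NEVER</value></property>

With NEVER the client keeps writing to the original pipeline instead of trying to find a replacement datanode, which cannot succeed when only one exists.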

Full logs from the HDFS side (name is the namenode, data is the datanode; because of docker-compose the interleaving of the logs is not exactly chronological):

name  | 16/10/03 18:03:43 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.18.0.11:50010, datanodeUuid=8ad27f17-7a87-45cb-b782-981c2e7b6dc2, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-22dd8c41-af12-41ad-81ef-832ebb10ec39;nsid=1117453574;c=0) storage 8ad27f17-7a87-45cb-b782-981c2e7b6dc2
name  | 16/10/03 18:03:43 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
data  | 16/10/03 18:03:43 INFO datanode.VolumeScanner: VolumeScanner(/hadoop/dfs/data, DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6): finished scanning block pool BP-1023406345-172.18.0.9-1475517812059
data  | 16/10/03 18:03:43 INFO datanode.DataNode: Block pool Block pool BP-1023406345-172.18.0.9-1475517812059 (Datanode Uuid null) service to hadoop-nn1/172.18.0.9:8020 successfully registered with NN
data  | 16/10/03 18:03:43 INFO datanode.DataNode: For namenode hadoop-nn1/172.18.0.9:8020 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
name  | 16/10/03 18:03:43 INFO net.NetworkTopology: Adding a new node: /default-rack/172.18.0.11:50010
name  | 16/10/03 18:03:44 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
data  | 16/10/03 18:03:44 INFO datanode.VolumeScanner: VolumeScanner(/hadoop/dfs/data, DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6): no suitable block pools found to scan.  Waiting 1814399359 ms.
data  | 16/10/03 18:03:44 INFO datanode.DataNode: Namenode Block pool BP-1023406345-172.18.0.9-1475517812059 (Datanode Uuid 8ad27f17-7a87-45cb-b782-981c2e7b6dc2) service to hadoop-nn1/172.18.0.9:8020 trying to claim ACTIVE state with txid=1
name  | 16/10/03 18:03:44 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6 for DN 172.18.0.11:50010
data  | 16/10/03 18:03:44 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1023406345-172.18.0.9-1475517812059 (Datanode Uuid 8ad27f17-7a87-45cb-b782-981c2e7b6dc2) service to hadoop-nn1/172.18.0.9:8020
data  | 16/10/03 18:03:44 INFO datanode.DataNode: Successfully sent block report 0x8b0c17676f1,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 16 msec to generate and 190 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
name  | 16/10/03 18:03:44 INFO BlockStateChange: BLOCK* processReport: from storage DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6 node DatanodeRegistration(172.18.0.11:50010, datanodeUuid=8ad27f17-7a87-45cb-b782-981c2e7b6dc2, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-22dd8c41-af12-41ad-81ef-832ebb10ec39;nsid=1117453574;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 msecs
name  | 16/10/03 18:04:33 INFO hdfs.StateChange: DIR* completeFile: /XXX/appender/1475517840000/.write/172.18.0.6 is closed by DFSClient_NONMAPREDUCE_1250587730_30
data  | 16/10/03 18:03:44 INFO datanode.DataNode: Got finalize command for block pool BP-1023406345-172.18.0.9-1475517812059
data  | 16/10/03 18:04:34 INFO datanode.DataNode: Receiving BP-1023406345-172.18.0.9-1475517812059:blk_1073741825_1001 src: /172.18.0.6:39732 dest: /172.18.0.11:50010
data  | 16/10/03 18:04:34 INFO DataNode.clienttrace: src: /172.18.0.6:39732, dest: /172.18.0.11:50010, bytes: 7421, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1250587730_30, offset: 0, srvID: 8ad27f17-7a87-45cb-b782-981c2e7b6dc2, blockid: BP-1023406345-172.18.0.9-1475517812059:blk_1073741825_1001, duration: 107663969
name  | 16/10/03 18:04:33 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6:NORMAL:172.18.0.11:50010|RBW]]} for /XXX/appender/1475517840000/172.18.0.6
name  | 16/10/03 18:04:34 INFO namenode.FSNamesystem: BLOCK* blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6:NORMAL:172.18.0.11:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 <  minimum = 1) in file /XXX/appender/1475517840000/172.18.0.6
name  | 16/10/03 18:04:34 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.18.0.11:50010 is added to blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-593eb971-f0cc-4381-a2c7-0befbc4aa9e6:NORMAL:172.18.0.11:50010|RBW]]} size 7421
name  | 16/10/03 18:04:34 INFO hdfs.StateChange: DIR* completeFile: /XXX/appender/1475517840000/172.18.0.6 is closed by DFSClient_NONMAPREDUCE_1250587730_30
name  | 16/10/03 18:04:45 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 21 Number of transactions batched in Syncs: 1 Number of syncs: 8 SyncTimes(ms): 17 
name  | 16/10/03 18:04:45 INFO hdfs.StateChange: DIR* completeFile: /XXX/appender/1475517840000/.write/172.18.0.6 is closed by DFSClient_NONMAPREDUCE_-1821674544_30
name  | 16/10/03 18:04:48 WARN hdfs.StateChange: DIR* NameSystem.append: Failed to APPEND_FILE /XXX/appender/1475517840000/172.18.0.6 for DFSClient_NONMAPREDUCE_1129971636_30 on 172.18.0.6 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-1821674544_30 on 172.18.0.6

hdfs-site.xml (namenode):

<configuration>
<property><name>dfs.namenode.name.dir</name><value>file:///hadoop/dfs/name</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
</configuration>

hdfs-site.xml (datanode):

<configuration>
<property><name>dfs.datanode.data.dir</name><value>file:///hadoop/dfs/data</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.namenode.rpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.servicerpc-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.http-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.namenode.https-bind-host</name><value>0.0.0.0</value></property>
<property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
<property><name>dfs.datanode.use.datanode.hostname</name><value>true</value></property>
</configuration>
hadoop hdfs docker
  • 1 answer
  • 1614 Views
coderkid
Asked: 2016-09-27 09:03:58 +0800 CST

Possible to ssh into a server without using the -i flag for the key?

  • 2

I have 3 EC2 instances that all use the same private key. I set up a Hadoop cluster across these nodes, and they need passwordless entry to each other to work.

How can I use this private key to easily ssh into the servers without specifying the key each time?

The only thing I have is a .pem file. I have scp'ed the file to the master server.
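One common approach (a sketch; the host aliases, user, and key name below are assumptions) is to register the key in ~/.ssh/config, or load it into an agent, so that plain ssh picks it up without -i:

# Option 1: per-host client config
chmod 600 ~/.ssh/cluster.pem
cat >> ~/.ssh/config <<'EOF'
Host node1 node2 node3
  User ec2-user
  IdentityFile ~/.ssh/cluster.pem
EOF

# Option 2: load the key into ssh-agent for this session
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/cluster.pem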

amazon-ec2 ssh hadoop ssh-keys
  • 1 answer
  • 192 Views
Oleksandr
Asked: 2016-07-18 06:46:05 +0800 CST

Why does DFSZKFailoverController kill the Namenode process in Hadoop?

  • 0

I tried to configure a Hadoop high-availability cluster following this tutorial:
http://www.edureka.co/blog/how-to-set-up-hadoop-cluster-with-hdfs-high-availability/

Following that article, I ran into two main problems:

1. hdfs namenode -bootstrapStandby (I could not use this command, because the Namenode on the standby node was not started.) To get around this, I started the namenode on the standby node manually before running this command.
2. When I run the second ZKFC (on the standby node), it kills the Namenode process, and I cannot even start it manually afterwards. That is why the Namenode only starts on the active node. If we kill the active node, the standby just keeps doing nothing (it does not start a Namenode). (A typical startup order is sketched below for contrast.)
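A commonly used bring-up order for QJM-based HA (a sketch; daemon script names vary between distributions):

# 1. start the journalnodes on each journalnode host
hadoop-daemon.sh start journalnode
# 2. format and start the first (active) namenode
hdfs namenode -format
hadoop-daemon.sh start namenode
# 3. on the standby, copy the metadata over, then start it
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
# 4. initialize the HA state in ZooKeeper, then start a ZKFC on both namenode hosts
hdfs zkfc -formatZK
hadoop-daemon.sh start zkfc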

Does anyone know what is wrong with that article?

hadoop hdfs high-availability zookeeper
  • 1 answer
  • 716 Views
Ianvdl
Asked: 2015-07-31 01:21:51 +0800 CST

Why does Accumulo need $ZOOKEEPER_HOME in addition to the IPs of the Zookeeper ensemble?

  • 0

According to the documentation, Accumulo requires you to set $ZOOKEEPER_HOME (a local path) in its configuration files, and also needs the list of IPs of the Zookeeper ensemble. Why aren't the IPs alone enough?

What if your Zookeeper ensemble is separate from the Accumulo cluster, and there is no local path for $ZOOKEEPER_HOME?

(There is no accumulo tag at the moment.)

hadoop
  • 1 answer
  • 284 Views
Jas
Asked: 2015-01-23 02:49:56 +0800 CST

Is there a way to grep gzipped content in HDFS without extracting it?

  • 4

I am looking for a way to zgrep files on hdfs.

Something like:

hadoop fs -zcat hdfs://myfile.gz | grep "hi"

or

hadoop fs -cat hdfs://myfile.gz | zgrep "hi"

Neither of these really works for me. Is there any way to achieve this from the command line?
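One pipeline that should do it (a sketch; zgrep itself expects file arguments, but plain gzip can decompress the stream):

hadoop fs -cat hdfs://myfile.gz | gzip -cd | grep "hi"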

hadoop
  • 3 answers
  • 19772 Views
