
Questions tagged [hadoop] (ubuntu)

Imed
Asked: 2020-04-17 01:34:51 +0800 CST

Hadoop single-node cluster fails to start

  • 0

I am trying to install Hadoop 2.9.1 on Ubuntu 19.10, and I followed all the steps explained in this video: https://www.youtube.com/watch?v=Y6oit3rCsZo

The problem is that when I try to start the single-node cluster with this command:

hduser@-ubuntu:~$ start-dfs.sh

I get this error:

localhost: chown: changing ownership of '/usr/local/hadoop-2.9.1/logs': Operation not permitted

I searched and found some of the solutions given in "Permission denied error when starting a single-node cluster in Hadoop" and in "namenode is not running... I have already tried sudo chown -R username /usr/local/hadoop/ please help".

So I tried to fix the problem as follows:

1 - hduser@-ubuntu:~$ chown -R hduser /usr/local/hadoop/hadoop-2.9.1/

which returned:

chown: changing ownership of '/usr/local/hadoop/hadoop-2.9.1/logs/yarn-imed-resourcemanager-imed-bigdata-ubuntu.out': Operation not permitted
chown: changing ownership of '/usr/local/hadoop/hadoop-2.9.1/logs/SecurityAuth-imed.audit': Operation not permitted
chown: changing ownership of '/usr/local/hadoop/hadoop-2.9.1/logs/userlogs': Operation not permitted
chown: changing ownership of '/usr/local/hadoop/hadoop-2.9.1/logs/yarn-imed-resourcemanager-imed-bigdata-ubuntu.log': Operation not permitted

2 - I tried this command:

hduser@-ubuntu:~$ chmod 777 /usr/local/hadoop/hadoop-2.9.1/ 

and I got:

chmod: changing permissions of '/usr/local/hadoop/hadoop-2.9.1/': Operation not permitted

3 - I added sudo each time, like sudo chmod 777 /usr/local/hadoop/hadoop-2.9.1/, and I got:

[sudo] password for hduser: 

hduser is not in the sudoers file. This incident will be reported.
chmod: changing permissions of '/usr/local/hadoop/hadoop-2.9.1/': Operation not permitted

4 - Finally, I ran /usr/local/hadoop/hadoop-2.9.1/sbin/start-dfs.sh to start the cluster node directly. However, I got the same error:

Starting namenodes on [localhost]
localhost: chown: changing ownership of '/usr/local/hadoop-2.9.1/logs': Operation not permitted

Note that my hdfs-site.xml file is:

 <configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

    <property>
        <name>dfs.name.dir</name>
        <value>file:/usr/local/hadoop/hadoopdata/hdfs/namenode</value>
    </property>

    <property>
        <name>dfs.data.dir</name>
        <value>file:/usr/local/hadoop/hadoopdata/hdfs/datanode</value>
    </property>
 </configuration>

And my ~/.bashrc file contains these settings:

export HADOOP_PREFIX=/usr/local/hadoop/hadoop-2.9.1
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.9.1
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

#Native path
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib/native"

#Java path
export JAVA_HOME="/usr/lib/jvm/jdk1.8.0_251"

How can I fix this problem, please?
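
A minimal sketch of a likely fix: the errors show that hduser neither owns the logs directories nor has sudo rights, so the ownership change has to be made from an account that does have sudo (the hadoop group below is an assumption for illustration):

sudo chown -R hduser:hadoop /usr/local/hadoop/hadoop-2.9.1
sudo chown -R hduser:hadoop /usr/local/hadoop-2.9.1/logs    # the path named in the start-dfs.sh error

After that, start-dfs.sh can be retried as hduser.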

hadoop
  • 1 answer
  • 483 Views
Sanaya
Asked: 2020-02-07 02:00:37 +0800 CST

Error installing ssh server: connect to host localhost port 22: Connection refused

  • -3

I am trying to install openssh-server, and at the very end I get this error:

ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Thu 2020-02-06 13:52:20 +04; 9ms ago
  Process: 17679 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=255)
dpkg: error processing package openssh-server (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up ssh-import-id (5.5-0ubuntu1) ...
Processing triggers for ureadahead (0.100.0-19.1) ...
Processing triggers for systemd (229-4ubuntu21.27) ...
Processing triggers for ufw (0.35-0ubuntu2) ...
Rules updated for profile 'Apache Full'
Firewall reloaded
Errors were encountered while processing:
 openssh-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
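
The dpkg failure comes from the package's post-installation script, whose ExecStartPre=/usr/sbin/sshd -t check exited with status 255, so sshd is rejecting its configuration. A sketch of one way to diagnose it and finish the install (standard Ubuntu commands; nothing here is specific to this machine):

sudo /usr/sbin/sshd -t                      # prints the offending sshd_config line, if any
sudo journalctl -u ssh --no-pager | tail    # recent service logs often show the real failure reason
sudo dpkg --configure -a                    # re-run the interrupted package configuration
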
server ssh dpkg hadoop
  • 2 answers
  • 340 Views
political science
Asked: 2019-11-13 22:35:04 +0800 CST

Dealing with low memory for Hadoop 3.1.3 in a VMware Ubuntu 19.10 virtual machine

  • 0

I installed Hadoop on an Ubuntu virtual machine. I downloaded the source release listed on https://hadoop.apache.org/releases.html, specifically the 3.1.3 archive from https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.1.3/hadoop-3.1.3-src.tar.gz, and the installation succeeded ("Hadoop installed successfully"). It seems, however, that I am running low on memory. I allocated 2 GB of RAM to this Ubuntu VM, and as the screenshot showed, I intend to run several more virtual machines, which I need for various development work. I would like to know how much, or rather within what range, I should increase the memory so that I can use the Hadoop ecosystem on Ubuntu 19.10. I plan to do development work that can be done on a single node for now; later I want to work with multiple nodes. I am doing all of this on a laptop with 12 GB of RAM. I have the following virtual machines:
1)Ubuntu 19.10 VM
2)Debian 10 VM
3)Windows 10 VM
4)Cloudera Hadoop VM
5)Ubuntu 19.10(我已经在这个 VM 中安装了 Hadoop)

I allocated 2 GB to every virtual machine. I would also like to know how, as shown above, I can get the Hadoop command prompt back again, I mean after I power the machine off. I noticed that the shell in the screenshot reads debian@4d943db32085:~/hadoop$

debian is the username I configured in Ubuntu 19.10.
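
As a rough sketch of how to check what the guest actually has and to cap the Hadoop daemons' heap accordingly (the 1g value is only an assumption for a 2 GB guest; in Hadoop 3.x this variable is read from $HADOOP_HOME/etc/hadoop/hadoop-env.sh):

free -h                          # how much RAM and swap the VM really sees
export HADOOP_HEAPSIZE_MAX=1g    # cap the JVM heap for Hadoop daemons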

apache2 memory-usage ram hadoop 19.10
  • 1 answer
  • 95 Views
SriniShine
Asked: 2018-09-08 02:04:44 +0800 CST

FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

  • 4

I am trying to run Hive 3.1 with Hadoop 3.0. Here is my system configuration:

Ubuntu 18.04.1 LTS
Hadoop  version 3.0.3
Hive 3.1.0
Derby 10.14.2

When I execute a show tables; query, I get the following error:

FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

Below is the detailed error from the Hive log file:

2018-09-05T11:38:25,952  INFO [main] conf.HiveConf: Found configuration file file:/usr/local/apache-hive-3.1.0-bin/conf/hive-site.xml
2018-09-05T11:38:30,549  INFO [main] SessionState: Hive Session ID = 826ec55c-7fca-4fff-baa5-b5a010e5af89
2018-09-05T11:38:35,948  INFO [main] SessionState:
Logging initialized using configuration in jar:file:/usr/local/apache-hive-3.1.0-bin/lib/hive-common-3.1.0.jar!/hive-log4j2.properties Asy$
2018-09-05T11:38:47,015  INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hadoop
2018-09-05T11:38:47,069  INFO [main] session.SessionState: Created local directory: /tmp/mydir
2018-09-05T11:38:47,096  INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hadoop/826ec55c-7fca-4fff-baa5-b5a010e5af89
2018-09-05T11:38:47,104  INFO [main] session.SessionState: Created local directory: /tmp/mydir/826ec55c-7fca-4fff-baa5-b5a010e5af89
2018-09-05T11:38:47,122  INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hadoop/826ec55c-7fca-4fff-baa5-b5a010e5af89/_$
2018-09-05T11:38:47,125  INFO [main] conf.HiveConf: Using the default value passed in for log id: 826ec55c-7fca-4fff-baa5-b5a010e5af89
2018-09-05T11:38:47,126  INFO [main] session.SessionState: Updating thread name to 826ec55c-7fca-4fff-baa5-b5a010e5af89 main
2018-09-05T11:38:50,476  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] metastore.HiveMetaStore: 0: Opening raw store with implementatio$
2018-09-05T11:38:50,695  WARN [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] metastore.ObjectStore: datanucleus.autoStartMechanismMode is set$
2018-09-05T11:38:50,714  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] metastore.ObjectStore: ObjectStore, initialize called
2018-09-05T11:38:50,717  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] conf.MetastoreConf: Found configuration file file:/usr/local/apa$
2018-09-05T11:38:50,719  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] conf.MetastoreConf: Unable to find config file hivemetastore-sit$
2018-09-05T11:38:50,720  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] conf.MetastoreConf: Found configuration file null
2018-09-05T11:38:50,722  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] conf.MetastoreConf: Unable to find config file metastore-site.xml
2018-09-05T11:38:50,722  INFO [826ec55c-7fca-4fff-baa5-b5a010e5af89 main] conf.MetastoreConf: Found configuration file null

hive-site.xml

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

Environment variables in .profile (I am trying to configure an installation done by someone else, so the environment variables are set in .profile rather than .bashrc, even though Hadoop is run manually):

#HIVE
export HIVE_HOME=/usr/local/apache-hive-3.1.0-bin
export HIVE_CONF_DIR=/usr/local/apache-hive-3.1.0-bin/conf
export PATH=$HIVE_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
export CLASSPATH=$CLASSPATH:/usr/local/apache-hive-3.1.0-bin/lib/*:.

#DERBY
DERBY_HOME=/usr/local/db-derby-10.14.2.0-bin
export PATH=$PATH:$DERBY_HOME/bin
export CLASSPATH=$CLASSPATH:$DERBY_HOME/lib/derby.jar:$DERBY_HOME/lib/derbytool$

From the error messages, it looks like Hive is not finding a metastore-site.xml file for the Hive configuration.
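
A common cause of this SessionHiveMetaStoreClient failure with an embedded Derby metastore is a schema that was never initialized, or a stale metastore_db directory left over from an earlier run. A sketch, assuming the embedded Derby setup from hive-site.xml above and that losing the old metastore contents is acceptable:

rm -rf metastore_db                                  # remove a stale embedded-Derby metastore
$HIVE_HOME/bin/schematool -dbType derby -initSchema  # initialize the metastore schema

schematool ships with Hive 3; once it reports completion, show tables; can be retried.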

18.04 hadoop hive
  • 1 answer
  • 37249 Views
Aashish Kumar
Asked: 2018-07-11 10:34:43 +0800 CST

A JNI error has occurred in Hive 3.0.0

  • 0

I installed Hadoop successfully and it works fine, but when I installed Hive and ran the hive command in the terminal, I got this error:

Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 64
    at java.util.jar.JarFile.match(java.base@9-internal/JarFile.java:983)
    at java.util.jar.JarFile.checkForSpecialAttributes(java.base@9-internal/JarFile.java:1017)
    at java.util.jar.JarFile.isMultiRelease(java.base@9-internal/JarFile.java:399)
    at java.util.jar.JarFile.getEntry(java.base@9-internal/JarFile.java:524)
    at java.util.jar.JarFile.getJarEntry(java.base@9-internal/JarFile.java:480)
    at jdk.internal.util.jar.JarIndex.getJarIndex(java.base@9-internal/JarIndex.java:114)
    at jdk.internal.loader.URLClassPath$JarLoader$1.run(java.base@9-internal/URLClassPath.java:640)
    at jdk.internal.loader.URLClassPath$JarLoader$1.run(java.base@9-internal/URLClassPath.java:632)
    at java.security.AccessController.doPrivileged(java.base@9-internal/Native Method)
    at jdk.internal.loader.URLClassPath$JarLoader.ensureOpen(java.base@9-internal/URLClassPath.java:631)
    at jdk.internal.loader.URLClassPath$JarLoader.<init>(java.base@9-internal/URLClassPath.java:606)
    at jdk.internal.loader.URLClassPath$3.run(java.base@9-internal/URLClassPath.java:386)
    at jdk.internal.loader.URLClassPath$3.run(java.base@9-internal/URLClassPath.java:376)
    at java.security.AccessController.doPrivileged(java.base@9-internal/Native Method)
    at jdk.internal.loader.URLClassPath.getLoader(java.base@9-internal/URLClassPath.java:375)
    at jdk.internal.loader.URLClassPath.getLoader(java.base@9-internal/URLClassPath.java:352)
    at jdk.internal.loader.URLClassPath.getResource(java.base@9-internal/URLClassPath.java:218)
    at jdk.internal.loader.BuiltinClassLoader$3.run(java.base@9-internal/BuiltinClassLoader.java:463)
    at jdk.internal.loader.BuiltinClassLoader$3.run(java.base@9-internal/BuiltinClassLoader.java:460)
    at java.security.AccessController.doPrivileged(java.base@9-internal/Native Method)
    at jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(java.base@9-internal/BuiltinClassLoader.java:459)
    at jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(java.base@9-internal/BuiltinClassLoader.java:406)
    at jdk.internal.loader.BuiltinClassLoader.loadClass(java.base@9-internal/BuiltinClassLoader.java:364)
    at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(java.base@9-internal/ClassLoaders.java:184)
    at java.lang.ClassLoader.loadClass(java.base@9-internal/ClassLoader.java:419)
    at sun.launcher.LauncherHelper.loadMainClass(java.base@9-internal/LauncherHelper.java:585)
    at sun.launcher.LauncherHelper.checkAndLoadMain(java.base@9-internal/LauncherHelper.java:497)
Unable to determine Hadoop version information.
'hadoop version' returned:
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 64
    at java.util.jar.JarFile.match(java.base@9-internal/JarFile.java:983)
    at java.util.jar.JarFile.checkForSpecialAttributes(java.base@9-internal/JarFile.java:1017)
    at java.util.jar.JarFile.isMultiRelease(java.base@9-internal/JarFile.java:399)
    at java.util.jar.JarFile.getEntry(java.base@9-internal/JarFile.java:524)
    at java.util.jar.JarFile.getJarEntry(java.base@9-internal/JarFile.java:480)
    at jdk.internal.util.jar.JarIndex.getJarIndex(java.base@9-internal/JarIndex.java:114)
    at jdk.internal.loader.URLClassPath$JarLoader$1.run(java.base@9-internal/URLClassPath.java:640)
    at jdk.internal.loader.URLClassPath$JarLoader$1.run(java.base@9-internal/URLClassPath.java:632)
    at java.security.AccessController.doPrivileged(java.base@9-internal/Native Method)
    at jdk.internal.loader.URLClassPath$JarLoader.ensureOpen(java.base@9-internal/URLClassPath.java:631)
    at jdk.internal.loader.URLClassPath$JarLoader.<init>(java.base@9-internal/URLClassPath.java:606)
    at jdk.internal.loader.URLClassPath$3.run(java.base@9-internal/URLClassPath.java:386)
    at jdk.internal.loader.URLClassPath$3.run(java.base@9-internal/URLClassPath.java:376)
    at java.security.AccessController.doPrivileged(java.base@9-internal/Native Method)
    at jdk.internal.loader.URLClassPath.getLoader(java.base@9-internal/URLClassPath.java:375)
    at jdk.internal.loader.URLClassPath.getLoader(java.base@9-internal/URLClassPath.java:352)
    at jdk.internal.loader.URLClassPath.getResource(java.base@9-internal/URLClassPath.java:218)
    at jdk.internal.loader.BuiltinClassLoader$3.run(java.base@9-internal/BuiltinClassLoader.java:463)
    at jdk.internal.loader.BuiltinClassLoader$3.run(java.base@9-internal/BuiltinClassLoader.java:460)
    at java.security.AccessController.doPrivileged(java.base@9-internal/Native Method)
    at jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(java.base@9-internal/BuiltinClassLoader.java:459)
    at jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(java.base@9-internal/BuiltinClassLoader.java:406)
    at jdk.internal.loader.BuiltinClassLoader.loadClass(java.base@9-internal/BuiltinClassLoader.java:364)
    at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(java.base@9-internal/ClassLoaders.java:184)
    at java.lang.ClassLoader.loadClass(java.base@9-internal/ClassLoader.java:419)
    at sun.launcher.LauncherHelper.loadMainClass(java.base@9-internal/LauncherHelper.java:585)
    at sun.launcher.LauncherHelper.checkAndLoadMain(java.base@9-internal/LauncherHelper.java:497)

My .bashrc file is:

#Hadoop variables
export JAVA_HOME=/usr/lib/jvm/java-9-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
#end of Hadoop variable declaration


#HIVE variables
export HIVE_HOME=/usr/lib/hive
export HIVE_CONF_DIR=/usr/lib/hive/conf
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:
export CLASSPATH=$CLASSPATH:/usr/local/hive/lib/*:

I have been stuck on this for a week and cannot find a solution that works.
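
One detail worth noting: every frame in the stack trace says java.base@9-internal, and .bashrc points JAVA_HOME at java-9-openjdk-amd64, so the JVM is Java 9, while Hive 3 and the Hadoop releases of that generation target Java 8. A sketch of switching to Java 8 (standard Ubuntu package name and path; adjust to what the distribution actually installs):

sudo apt install openjdk-8-jdk
# then, in ~/.bashrc, replace the Java 9 line with:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64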

apache2 java hadoop hive
  • 1 answer
  • 478 Views
Ted Cahall
Asked: 2018-03-08 05:15:27 +0800 CST

What is a good way to build a low-cost Ubuntu-based Cassandra or Hadoop cluster at home?

  • 3

I have a set of "old" data-center servers (bought when my company upgraded its hardware) that I use to run Cassandra and Hadoop clusters at home. They are loud, power-hungry, and take up a lot of space in my basement.

Other than paying AWS a monthly bill for a bunch of EC2 nodes, is there a better way to run my own home data-center cluster?

server hadoop cassandra
  • 1 answer
  • 225 Views
Amit Singla
Asked: 2018-02-19 06:25:07 +0800 CST

Error copying files to HDFS in the Hadoop ecosystem

  • 1

When I issue the command in Hadoop 3.0 to copy a file from the local file system to HDFS, the terminal shows this error:

hadoop-3.0.0/hadoop2_data/hdfs/datanode': No such file or directory: 
`hdfs://localhost:9000/user/Amit/hadoop-3.0.0/hadoop2_data/hdfs/datanode.

However, I checked that the directory hadoop-3.0.0/hadoop2_data/hdfs/datanode exists and has the appropriate access permissions. I also tried uploading a file from the web browser, and it shows the following error:

"Couldn't find datanode to write file. Forbidden"

Please help me fix the problem. Attaching core-site.xml:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/Amit/hadoop-3.0.0/hadoop2_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.name.dir</name>
<value>file:/home/Amit/hadoop-3.0.0/hadoop2_data/hdfs/datanode</value>
</property>
</configuration>

I checked the Datanode log file in the Hadoop installation directory, and it shows success messages:

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/tmp/hadoop-Amit/dfs/data
INFO org.apache.commons.beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is DESKTOP-JIUFBOR.localdomain
INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:9866
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 10485760 bytes/s
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 50
INFO org.eclipse.jetty.util.log: Logging initialized @146677ms
INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 49833
INFO org.eclipse.jetty.server.Server: jetty-9.3.19.v20170502
INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@3a0baae5{/logs,file:///home/Amit/hadoop-3.0.0/logs/,AVAILABLE}
INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@289710d9{/static,file:///home/Amit/hadoop-3.0.0/share/hadoop/hdfs/webapps/static/,AVAILABLE}
INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@3016fd5e{/,file:///home/Amit/hadoop-3.0.0/share/hadoop/hdfs/webapps/datanode/,AVAILABLE}{/datanode}
INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@178213b{HTTP/1.1,[http/1.1]}{localhost:49833}
INFO org.eclipse.jetty.server.Server: Started @151790ms
INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:9864
INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = Amit
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9867
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:9867
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9867: starting
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-Amit/dfs/data/in_use.lock acquired by nodename [email protected]
INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-1751678544-127.0.1.1-1518974872649
INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /tmp/hadoop-Amit/dfs/data/current/BP-1751678544-127.0.1.1-1518974872649
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1436602813;bpid=BP-1751678544-127.0.1.1-1518974872649;lv=-57;nsInfo=lv=-64;cid=CID-b7086125-1e01-4cf4-94d0-f8b6b1d4db25;nsid=1436602813;c=1518974872649;bpid=BP-1751678544-127.0.1.1-1518974872649;dnuuid=f132f3ae-7f95-424d-b4d0-729602fc80dd
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-ba9d49d2-87cb-4dff-ae80-d7f11382644f
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - [DISK]file:/tmp/hadoop-Amit/dfs/data, StorageType: DISK
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for /tmp/hadoop-Amit/dfs/data
INFO org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker: Scheduled health check for volume /tmp/hadoop-Amit/dfs/data
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1751678544-127.0.1.1-1518974872649
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1751678544-127.0.1.1-1518974872649 on volume /tmp/hadoop-Amit/dfs/data...
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1751678544-127.0.1.1-1518974872649 on /tmp/hadoop-Amit/dfs/data: 1552ms
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1751678544-127.0.1.1-1518974872649: 1597ms
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1751678544-127.0.1.1-1518974872649 on volume /tmp/hadoop-Amit/dfs/data...
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice: Replica Cache file: /tmp/hadoop-Amit/dfs/data/current/BP-1751678544-127.0.1.1-1518974872649/current/replicas doesn't exist 
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1751678544-127.0.1.1-1518974872649 on volume /tmp/hadoop-Amit/dfs/data: 1ms
INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 4ms
INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/tmp/hadoop-Amit/dfs/data, DS-ba9d49d2-87cb-4dff-ae80-d7f11382644f): no suitable block pools found to scan.  Waiting 1811581849 ms.
INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 2/18/18 8:15 PM with interval of 21600000ms
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1751678544-127.0.1.1-1518974872649 (Datanode Uuid f132f3ae-7f95-424d-b4d0-729602fc80dd) service to localhost/127.0.0.1:9000 beginning handshake with NN
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1751678544-127.0.1.1-1518974872649 (Datanode Uuid f132f3ae-7f95-424d-b4d0-729602fc80dd) service to localhost/127.0.0.1:9000 successfully registered with NN
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/127.0.0.1:9000 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xe646383a22bd4be5,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 9 msec to generate and 834 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
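
Two hedged observations. First, the failing destination hdfs://localhost:9000/user/Amit/... suggests a relative path was resolved against an HDFS home directory that was never created; a sketch (localfile.txt is a hypothetical example file):

hdfs dfs -mkdir -p /user/Amit            # create the HDFS home directory
hdfs dfs -put localfile.txt /user/Amit/  # copy with an explicit destination

Second, the hdfs-site.xml above uses dfs.datanode.name.dir, but the standard property is dfs.datanode.data.dir; that would explain why the log shows the datanode falling back to /tmp/hadoop-Amit/dfs/data instead of the configured directory.
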
ubuntu-one localhost hadoop
  • 1 answer
  • 5754 Views
Amit Singla
Asked: 2018-02-12 05:46:27 +0800 CST

hadoop namenode -format shows an error

  • 1

I am trying to set up Hadoop 3.0.0 on my Ubuntu desktop. I have finished all the required setup, but when I run the command ./hadoop namenode -format to format the namenode, it shows an error:

root@DESKTOP-JIUFBOR:/usr/local/hadoop/hadoop-3.0.0/bin# ./hadoop namenode -format
WARNING: Use of this script to execute namenode is deprecated.
WARNING: Attempting to execute replacement "hdfs namenode" instead.

Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hdfs/server/namenode/NameNode : Unsupported major.minor version 52.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
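
Unsupported major.minor version 52.0 means these classes were compiled for Java 8 (class-file version 52) but are being run on an older JVM, most likely Java 7. A sketch of the usual fix (on Ubuntu 14.04 the openjdk-8-jdk package may require the openjdk-r PPA):

java -version                        # confirm which Java the shell resolves to
sudo apt-get install openjdk-8-jdk
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # e.g. in hadoop-env.sh or ~/.bashrc
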
14.04 hadoop
  • 1 answer
  • 850 Views
curious
Asked: 2016-08-21 04:50:52 +0800 CST

namenode is not running... I have already tried sudo chown -R username /usr/local/hadoop/ please help

  • 2
$ /usr/local/hadoop/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh 
Starting namenodes on [localhost] 
divyeshlad@localhost's password:  
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-divyeshlad-namenode-divyeshlad-VirtualBox.out
localhost: chown: changing ownership of '/usr/local/hadoop/logs': Operation not permitted
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-divyeshlad-namenode-divyeshlad-VirtualBox.out: Permission denied
localhost: head: cannot open '/usr/local/hadoop/logs/hadoop-divyeshlad-namenode-divyeshlad-VirtualBox.out' for reading: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-divyeshlad-namenode-divyeshlad-VirtualBox.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-divyeshlad-namenode-divyeshlad-VirtualBox.out: Permission denied
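
As in the chown question above, /usr/local/hadoop/logs appears to be owned by another user (often root, from unpacking the tarball with sudo). A sketch of a fix run from a sudo-capable account (the divyeshlad user name is taken from the log; the group is an assumption):

sudo chown -R divyeshlad:divyeshlad /usr/local/hadoop/logs
/usr/local/hadoop/sbin/start-dfs.sh    # then retry
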
hadoop
  • 1 answer
  • 1346 Views
Sankalp
Asked: 2015-03-21 13:47:30 +0800 CST

Node /hbase-unsecure is not in ZooKeeper. Check the value configured in 'zookeeper.znode.parent'

  • 2

I get this error when starting standalone HBase on my Ubuntu machine. Please help; I have spent a lot of time trying to get it running. :(

What I have checked so far:

  1. /etc/hosts contains localhost 127.0.0.1
  2. HBase: hbase-0.98.3-hadoop2-bin.tar.gz
  3. Hadoop: hadoop-2.6.0.tar.gz
  4. My hbase-site.xml already has the node /hbase-unsecure.

When I try to run this command: create 'usertable', 'resultfamily'

it gives me the following exception:

ERROR: The node /hbase-unsecure is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.

<configuration>
  <property>
    <name>hbase.rootdir</name>    
    <value>hdfs://localhost:54310/hbase</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hduser/zookeeper</value>
  </property>

  <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
      <description>Property from ZooKeeper's config zoo.cfg.
      The port at which the clients will connect.
      </description>
  </property>

  <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

   <property>
        <name>zookeeper.znode.parent</name>
        <value>/hbase-unsecure</value>
    </property>

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>localhost</value>
      <description>Comma separated list of servers in the ZooKeeper Quorum.
      </description>
    </property>

  <property>
         <name>dfs.replication</name>
         <value>1</value>
    </property>

  <property>
        <name>hbase.master</name> 
        <value>hadoop-master:60000</value>
  </property>

</configuration>
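
One hedged observation: although the question describes standalone HBase, the hbase-site.xml above sets hbase.cluster.distributed to true, which makes HBase expect an external ZooKeeper. A sketch of checking which parent znode the master actually wrote (zkCli.sh ships with ZooKeeper; the port matches the config above):

zkCli.sh -server localhost:2181 ls /    # if /hbase is listed instead of /hbase-unsecure,
                                        # the shell and the master disagree on zookeeper.znode.parent
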
hadoop
  • 1 answer
  • 9673 Views
