
Setting up a 3-node Hadoop cluster: what to do when Live Nodes shows 0


First, when setting up a 3-node Hadoop cluster, follow the basic installation steps and configure the following files (a minimal sketch of the key entries is given after the list):
1.core-site.xml
2.hadoop-env.sh
3.hdfs-site.xml
4.yarn-env.sh
5.yarn-site.xml
6.slaves
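For reference, here is a minimal sketch of the most important entries, assuming Hadoop 2.x defaults and that the NameNode runs on spark1; the RPC port 9000 and the data-directory defaults are assumptions, so adjust them to your environment:

core-site.xml:
<configuration>
  <property>
    <!-- assumed NameNode host and RPC port; adjust to your cluster -->
    <name>fs.defaultFS</name>
    <value>hdfs://spark1:9000</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <!-- one replica per DataNode in this 3-node cluster -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

slaves (one DataNode hostname per line):
spark1
spark2
spark3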
Next, format the NameNode:
[root@spark1 hadoop]# hdfs namenode -format
Then start the HDFS cluster:
[root@spark1 hadoop]# start-dfs.sh
 
Check whether the daemons are running on each node.
spark1:
[root@spark1 hadoop]# jps
5575 SecondaryNameNode
5722 Jps
5443 DataNode
5336 NameNode
spark2:
[root@spark2 hadoop]# jps
1859 Jps
1795 DataNode
spark3:
[root@spark3 ~]# jps
1748 DataNode
1812 Jps
 
The core configuration files were all set up correctly, yet when checking the NameNode web UI on port 50070, Live Nodes showed only 1, and that node was spark1!
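Before changing any configuration, it helps to look at the DataNode log on one of the missing nodes. A hedged example, assuming a tarball install where logs live under $HADOOP_HOME/logs and the daemons were started as root (log file names follow the hadoop-<user>-datanode-<hostname>.log convention):
[root@spark2 hadoop]# tail -n 50 $HADOOP_HOME/logs/hadoop-root-datanode-spark2.log
If the DataNode cannot resolve or reach the NameNode, this log usually shows repeated connection retries or an UnknownHostException for the NameNode hostname, which points toward a hosts/DNS problem.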
After troubleshooting, the root cause turned out to be the original /etc/hosts configuration: each node's file only mapped its own hostname, so the nodes could not resolve one another. The files looked like this:
spark1:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.111  spark1
spark2:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.112  spark2
spark3:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.113  spark3
 
Now, change /etc/hosts on all three nodes to the same unified content:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.111  spark1
192.168.30.112  spark2
192.168.30.113  spark3
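To apply the same file on every node, one option is simply to copy it out from spark1 and then restart HDFS so the DataNodes re-register; a sketch, assuming passwordless SSH between the nodes is already set up (it normally is, since start-dfs.sh needs it):
[root@spark1 hadoop]# scp /etc/hosts root@spark2:/etc/hosts
[root@spark1 hadoop]# scp /etc/hosts root@spark3:/etc/hosts
[root@spark1 hadoop]# stop-dfs.sh
[root@spark1 hadoop]# start-dfs.sh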
OK, the problem is solved.
 
Verify with the following command:
[root@spark1 hadoop]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 55609774080 (51.79 GB)
Present Capacity: 47725793280 (44.45 GB)
DFS Remaining: 47725719552 (44.45 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)
Live datanodes:
Name: 192.168.30.111:50010 (spark1)
Hostname: spark1
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2628579328 (2.45 GB)
DFS Remaining: 15907987456 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:06 CST 2017
Name: 192.168.30.113:50010 (spark3)
Hostname: spark3
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2627059712 (2.45 GB)
DFS Remaining: 15909507072 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.83%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:05 CST 2017
Name: 192.168.30.112:50010 (spark2)
Hostname: spark2
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2628341760 (2.45 GB)
DFS Remaining: 15908225024 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:05 CST 2017
 
From the report output above, you can see that 3 DataNodes are connected:
192.168.30.111
192.168.30.112
192.168.30.113
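The same count can also be read from the NameNode itself. A hedged example, assuming Hadoop 2.x where the NameNode web UI listens on port 50070 and exposes the standard /jmx endpoint:
[root@spark1 hadoop]# curl -s 'http://spark1:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'
The JSON returned includes a NumLiveDataNodes field, which should now report 3, matching the Live Nodes value on the 50070 page.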
