Problem running the start-all.sh command after installing Hadoop

Posted by xiangfangwei on 2013/08/02 16:20
[root@localhost bin]# ./start-all.sh
starting namenode, logging to /usr/local/hadoop/hadoop-0.20.2/bin/../logs/hadoop-x-namenode-localhost.localdomain.out
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
RSA key fingerprint is 79:c3:af:de:fc:3b:cb:fa:c4:df:86:72:50:15:30:84.
Are you sure you want to continue connecting (yes/no)? yes
127.0.0.1: Warning: Permanently added '127.0.0.1' (RSA) to the list of known hosts.
127.0.0.1: starting datanode, logging to /usr/local/hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-datanode-localhost.localdomain.out
datanode01: ssh: Could not resolve hostname datanode01: Name or service not known
127.0.0.1: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-localhost.localdomain.out
namenode: ssh: Could not resolve hostname namenode: Name or service not known
starting jobtracker, logging to /usr/local/hadoop/hadoop-0.20.2/bin/../logs/hadoop-x-jobtracker-localhost.localdomain.out
127.0.0.1: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.20.2/bin/../logs/hadoop-root-tasktracker-localhost.localdomain.out
datanode01: ssh: Could not resolve hostname datanode01: Name or service not known


In the masters file I set namenode, and in the slaves file I set datanode01, as shown below.
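For reference, the two files currently look like this (assuming the standard conf/ directory layout of hadoop-0.20.2):

conf/masters:
namenode

conf/slaves:
datanode01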

When editing /etc/hosts, I want to know: how should the IP addresses for the master and the slaves be assigned?
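For example, since everything is running on this one machine right now, would something like the following be correct? (192.168.1.100 is only a placeholder here; it would be replaced with the machine's actual address.)

127.0.0.1       localhost localhost.localdomain
192.168.1.100   namenode
192.168.1.100   datanode01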
