How to configure hosts file for Hadoop ecosystem
The question may seem pretty obvious, but I have run into it many times due to a badly configured hosts file on a Hadoop cluster.

Can anyone describe how to set up the hosts file and other related network configuration for Hadoop and similar environments (such as Cloudera)?

Especially when I have to add both the hostname and the FQDN.

Update

Here is the hosts file of one of the machines; the host named cdh4hdm has the role of Hadoop master:

 127.0.0.1       cdh4hdm        localhost
 #127.0.1.1      cdh4hdm

 # The following lines are desirable for IPv6 capable hosts

 172.26.43.40    cdh4hdm.imp.co.in    kdc1
 172.26.43.41    cdh4hbm.imp.co.in
 172.26.43.42    cdh4s1.imp.co.in
 172.26.43.43    cdh4s2.imp.co.in
 172.26.43.44    cdh4s3.imp.co.in
 172.26.43.45    cdh4s4.imp.co.in

 ::1     ip6-localhost ip6-loopback
 fe00::0 ip6-localnet
 ff00::0 ip6-mcastprefix
 ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters

Please see the attached image.

On the cluster, some nodes resolve to their FQDN while others resolve only to their short hostname.

Also, the IP for the hostname is not right: it shows 127.0.0.1 instead of the host's real IP.

Please suggest a fix.

Menopause answered 5/3, 2014 at 9:5 Comment(2)
Do you mean the /etc/hosts file? – Invagination
@Invagination Please see the updated question for details; if you need anything else, feel free to ask. – Menopause

For Ubuntu

Hosts file and other network configuration for a Hadoop cluster

Give every cluster machine a hostname. To do so, put the name in the /etc/hostname file:

hostname-of-machine
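Hostnames must follow RFC 1123: letters, digits, and hyphens only, no leading or trailing hyphen, and at most 63 characters. A quick sanity check before writing the name can be sketched as a small shell function (is_valid_hostname is our own helper name, not a standard tool):

```shell
# Hypothetical helper: validate a candidate hostname before writing it
# to /etc/hostname (RFC 1123: alphanumerics and hyphens, no leading or
# trailing hyphen, at most 63 characters).
is_valid_hostname() {
  printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$'
}

is_valid_hostname cdh4hdm  && echo "cdh4hdm: ok"
is_valid_hostname -badname && echo "-badname: ok" || echo "-badname: rejected"
```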

On every host, the /etc/hosts file should look like this:

127.0.0.1       localhost
#127.0.1.1      localhost

<ip of host>    FQDN                hostname    other_name
172.26.43.10    cdh4hdm.domain.com  cdh4hdm     kdc1
172.26.43.11    cdh4hbm.domain.com  cdh4hbm
172.26.43.12    cdh4s1.domain.com   cdh4s1
172.26.43.13    cdh4s2.domain.com   cdh4s2
172.26.43.14    cdh4s3.domain.com   cdh4s3
172.26.43.15    cdh4s4.domain.com   cdh4s4

Note: Make sure the line 127.0.1.1 localhost is commented out; leaving it in can cause problems with ZooKeeper and the cluster.
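A quick way to catch this misconfiguration is to check which address the resolver actually returns for a name: if a node's own hostname maps to 127.x.x.x, the daemons will register themselves with a loopback address. A sketch using glibc's getent (check_loopback is our own helper name, not a standard tool):

```shell
# Hypothetical helper: warn when a name resolves to a loopback address.
# getent's ahostsv4 database forces IPv4 and honours /etc/hosts entries.
check_loopback() {
  addr=$(getent ahostsv4 "$1" | awk '{print $1; exit}')
  case "$addr" in
    127.*) echo "WARN: $1 resolves to loopback ($addr)" ;;
    "")    echo "WARN: $1 does not resolve at all" ;;
    *)     echo "OK:   $1 -> $addr" ;;
  esac
}

check_loopback localhost      # loopback is expected (and fine) for localhost
check_loopback "$(hostname)"  # this one must NOT be loopback on a cluster node
```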

Add your DNS server's IP to /etc/resolv.conf:

search domain.com
nameserver 10.0.1.1

To verify the configuration, check the hosts file; you should be able to ping every machine by its hostname.

To check the hostname and FQDN, run the following commands on every machine:

hostname        # should return the short hostname
hostname -f     # fully qualified domain name (FQDN)
hostname -d     # domain name
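Tying the checks together: the FQDN reported by hostname -f should resolve back to the machine's real NIC address, not to loopback. A minimal sketch (the fallback to the short hostname is our own defensive choice, in case hostname -f fails on an unconfigured box):

```shell
# Cross-check: the machine's reported FQDN should map to its real IP.
fqdn=$(hostname -f 2>/dev/null || hostname)
ip=$(getent ahostsv4 "$fqdn" | awk '{print $1; exit}')
echo "$fqdn -> ${ip:-UNRESOLVED}"
case "$ip" in
  127.*) echo "misconfigured: FQDN maps to loopback" ;;
esac
```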

All commands are the same on RHEL, except the way the hostname is set.

Source1 and Source2

Menopause answered 13/3, 2014 at 10:3 Comment(0)

If you mean the /etc/hosts file, then here is how I have set it in my Hadoop cluster:

127.0.0.1       localhost
192.168.0.5     master
192.168.0.6     slave1
192.168.0.7     slave2
192.168.0.18    slave3
192.168.0.3     slave4
192.168.0.4     slave5  nameOfCurrentMachine

Here, nameOfCurrentMachine is the machine on which this file is set, used as slave5. Some people say that the first line should be removed, but I have not faced any issues, nor have I tried removing it.

Then, the $HADOOP_CONF_DIR/masters file in the master node should be:

master

and the $HADOOP_CONF_DIR/slaves file in the master node should be:

slave1
slave2
slave3
slave4
slave5

In every other node, I have simply set these two files to contain just:

localhost

You should also make sure that you can ssh from the master to every other node (using its name, not its IP) without a password. This post describes how to achieve that.
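The usual way to set that up is a key pair on the master plus ssh-copy-id to each slave. A sketch, assuming the slave names from this answer; the resolvability guard is our own addition so the loop degrades gracefully on names that do not resolve yet:

```shell
# Run on the master. Generates a key pair once, then pushes the public
# key to every slave so "ssh slaveN" works without a password prompt.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa -q

for h in slave1 slave2 slave3 slave4 slave5; do
  if getent hosts "$h" >/dev/null; then
    ssh-copy-id "$h"    # appends the key to ~/.ssh/authorized_keys on $h
  else
    echo "skipping $h (name does not resolve yet)"
  fi
done
```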

Invagination answered 5/3, 2014 at 9:28 Comment(4)
It worked when I used only the hostname. What about the FQDN? – Menopause
I have never tried anything other than what I describe here. Which part are you referring to: nameOfCurrentMachine, or the IP? – Invagination
I have FQDNs for all nodes, but on the cluster some nodes come up with only the hostname while others show the FQDN. – Menopause
Can you give an example of what names you have available and what you tried? Perhaps update your question with this example. – Invagination

Keep the slaves' hosts file as:

127.0.0.1 localhost

Keep the master's hosts file as:

<private-ip-of-master>  master
<private-ip-of-slave1>  slave1
<private-ip-of-slave2>  slave2
Cardin answered 28/12, 2017 at 6:55 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.