"start-all.sh" and "start-dfs.sh" from master node do not start the slave node services?
I have updated the /conf/slaves file on the Hadoop master node with the hostnames of my slave nodes, but I'm not able to start the slaves from the master. I have to individually start the slaves, and then my 5-node cluster is up and running. How can I start the whole cluster with a single command from the master node?

Also, a SecondaryNameNode is running on every slave. Is that a problem? If so, how can I remove these extra instances from the slaves? I think there should be only one SecondaryNameNode in a cluster with one NameNode, am I right?

Thank you!

Putnam answered 21/2, 2018 at 16:17 Comment(0)
In Apache Hadoop 3.0, use the $HADOOP_HOME/etc/hadoop/workers file (it replaces the old slaves file): list the slave nodes in it, one hostname per line.
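
As a minimal sketch (the hostnames below are placeholders for your own slave nodes, and this assumes passwordless SSH from the master to each worker):

    # $HADOOP_HOME/etc/hadoop/workers -- one worker hostname per line
    slave1
    slave2
    slave3
    slave4
    slave5

    # Then, from the master node:
    $HADOOP_HOME/sbin/start-dfs.sh    # starts NameNode, SecondaryNameNode, DataNodes
    $HADOOP_HOME/sbin/start-yarn.sh   # starts ResourceManager, NodeManagers

The start scripts read the workers file and SSH into each listed host to launch the worker daemons, which is why the file must be correct on the master node itself.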

Ethelynethene answered 14/3, 2018 at 9:39 Comment(1)
I have added the slave server IPs to $HADOOP_HOME/etc/hadoop/workers on the master server. When I run start-all.sh on the master, it starts the SecondaryNameNode, ResourceManager, and NameNode, but on the slave servers it only starts the NodeManager, not the DataNode. – Hurst
