Hadoop NameNode: single point of failure

The NameNode in the Hadoop architecture is a single point of failure.

How do people with large Hadoop clusters cope with this problem?

Is there an industry-accepted solution that has worked well, wherein a secondary NameNode takes over if the primary one fails?

Esperance answered 21/12, 2010 at 17:46

Yahoo has certain recommendations for configuration settings at different cluster sizes to take NameNode failure into account. For example:

The single point of failure in a Hadoop cluster is the NameNode. While the loss of any other machine (intermittently or permanently) does not result in data loss, NameNode loss results in cluster unavailability. The permanent loss of NameNode data would render the cluster's HDFS inoperable.

Therefore, another step should be taken in this configuration to back up the NameNode metadata.
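In the Hadoop 1.x era, the usual way to follow that advice was to point the NameNode at more than one metadata directory. A minimal sketch, assuming a standard `hdfs-site.xml` (the paths and the NFS mount are made-up examples; the property was `dfs.name.dir` in 1.x and is `dfs.namenode.name.dir` in 2.x):

```xml
<!-- hdfs-site.xml: the NameNode writes its fsimage and edit log to
     every directory in this comma-separated list, so including an
     NFS mount keeps an off-machine copy of the metadata.
     Paths below are illustrative only. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/1/dfs/nn,/mnt/remote-nfs/dfs/nn</value>
</property>
```

If the NameNode machine dies, a replacement can be started against the NFS copy. Note this gives you a recoverable backup, not automatic failover.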

Facebook uses a tweaked version of Hadoop for its data warehouses, with some optimizations that focus on NameNode reliability. In addition to the patches available on GitHub, Facebook appears to use AvatarNode specifically for fast switching between primary and secondary NameNodes. Dhruba Borthakur's blog contains several other entries offering further insight into the NameNode as a single point of failure.

Edit: Further info about Facebook's improvements to the NameNode.

Snip answered 21/12, 2010 at 19:51

High availability (HA) for the NameNode was introduced in the Hadoop 2.x releases.

It can be achieved in two modes: with shared storage on NFS, or with the Quorum Journal Manager (QJM).

High availability with the Quorum Journal Manager (QJM) is the preferred option.

In a typical HA cluster, two separate machines are configured as NameNodes. At any point in time, exactly one of the NameNodes is in an Active state, and the other is in a Standby state. The Active NameNode is responsible for all client operations in the cluster, while the Standby is simply acting as a slave, maintaining enough state to provide a fast failover if necessary.
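As a rough sketch of what the QJM flavour looks like in `hdfs-site.xml` (the property names are the standard HDFS ones, but the nameservice ID `mycluster` and all host names are invented for this example):

```xml
<!-- hdfs-site.xml (sketch): two NameNodes behind one logical
     nameservice, sharing edits through a JournalNode quorum. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <!-- The Active NameNode writes edits here; the Standby tails them. -->
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<property>
  <!-- Automatic failover additionally needs ZKFC processes and a
       ZooKeeper ensemble (ha.zookeeper.quorum in core-site.xml). -->
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```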

Have a look at the SE questions below, which explain the complete failover process.

Secondary NameNode usage and High availability in Hadoop 2.x

How does Hadoop Namenode failover process works?
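For completeness, the failover itself can be inspected and driven with the stock `hdfs haadmin` tool (the NameNode IDs `nn1`/`nn2` follow the config sketch above):

```sh
# Ask each NameNode for its current role; prints "active" or "standby".
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Manually fail over so that nn2 becomes active; nn1 is transitioned
# to standby (and fenced if a graceful transition fails).
hdfs haadmin -failover nn1 nn2
```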

Natashianatassia answered 18/1, 2016 at 8:35

Large Hadoop clusters have thousands of data nodes and one name node. The probability of some machine failing goes up roughly linearly with machine count (all else being equal): if, say, each machine has a 0.5% chance of failing in a given month, a 4,000-node cluster should expect around 20 failures a month. So if Hadoop didn't cope with data node failures, it wouldn't scale. Since there's still only one name node, the Single Point of Failure (SPOF) remains, but the probability of that one machine failing stays low.

That said, the answer above about Facebook adding failover capability to the name node is right on.

Ward answered 21/12, 2010 at 22:24
