I have some Spark applications that store their output to HDFS.
Our Hadoop cluster uses NameNode HA, and the Spark cluster sits outside the Hadoop cluster (I know that's not ideal), so I need to pass an HDFS URI to the application for it to access HDFS.
The application doesn't recognize the logical nameservice, though, so I can only give it one NameNode's URI, and if that NameNode fails I have to modify the configuration file and try again.
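For concreteness, the job currently does roughly this (the app name, hostname, and output path are made up):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hdfs-writer")   // hypothetical app name
      .getOrCreate()

    // Hardcodes one NameNode's RPC address; if that node is down or in
    // standby, the write fails and I have to edit the URI by hand.
    spark.range(100).write.parquet("hdfs://namenode1.example.com:8020/user/spark/output")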
Querying ZooKeeper to find the active NameNode seems very annoying, so I'd like to avoid that.
Could you suggest any alternatives?
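Is there, for instance, a way to hand the application the HA mapping directly, along these lines? (The nameservice name mycluster and the hostnames below are placeholders, and I'm not sure this is the right set of properties.)

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("hdfs-writer").getOrCreate()
    val hc = spark.sparkContext.hadoopConfiguration

    // Define the logical nameservice and both NameNodes on the client side.
    hc.set("fs.defaultFS", "hdfs://mycluster")
    hc.set("dfs.nameservices", "mycluster")
    hc.set("dfs.ha.namenodes.mycluster", "nn1,nn2")
    hc.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020")
    hc.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020")
    // ConfiguredFailoverProxyProvider tries the listed NameNodes and fails
    // over to the active one, with no ZooKeeper lookup from the client.
    hc.set("dfs.client.failover.proxy.provider.mycluster",
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

    // Writes now target the logical nameservice, not a specific host.
    spark.range(100).write.parquet("hdfs://mycluster/user/spark/output")

The appeal would be not having to copy the cluster's full hdfs-site.xml onto the Spark machines.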
You could run hadoop dfsadmin -report to get the status. – Geer