hadoop namenode port in use

This is actually a standby HA NameNode. It was configured with the same settings as the primary, and hdfs namenode -bootstrapStandby ran successfully. It starts coming up on the standard HTTP port 50070, as defined in the config file:

<property>
  <name>dfs.namenode.http-address.ha-hadoop.namenode2</name>
  <value>namenode2:50070</value>
</property>
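
For reference, the standby was brought up with the usual two-step sequence (a sketch; exact script locations vary between installs):

# On the standby node, pull the current namespace from the active NameNode
hdfs namenode -bootstrapStandby

# Then start the NameNode daemon
sbin/hadoop-daemon.sh start namenode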

Startup begins OK, then hits:

15/02/02 08:06:17 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:50070
15/02/02 08:06:17 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
15/02/02 08:06:17 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
15/02/02 08:06:17 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
15/02/02 08:06:17 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
15/02/02 08:06:17 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
15/02/02 08:06:17 INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: hadoop1:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
        ... 8 more
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
15/02/02 08:06:17 FATAL namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: hadoop1:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
        ... 8 more
15/02/02 08:06:17 INFO util.ExitUtil: Exiting with status 1
15/02/02 08:06:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1.marketstudies.com/192.168.1.125
************************************************************/

I have tried changing the http-address port by setting:

<property>
  <name>dfs.namenode.http-address.local1-hadoop.hadoop1</name>
  <value>hadoop1:10070</value>
</property>

But then I get the same error as above, only with the new port:

15/02/02 08:16:51 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070

This is working with the same config on the primary namenode.

A similar question looked like my issue, but its answer didn't help. I tried setting dfs.http.address to other values and it changed nothing. I believe that is the non-HA config option, replaced in an HA setup by dfs.namenode.http-address.<nameservice>.<namenode-id>.

Nothing is actually listening on the HTTP port, as can be seen here:

# netstat -anp |grep LIST
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      946/sshd
tcp        0      0 0.0.0.0:46712           0.0.0.0:*               LISTEN      2066/java
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:8480            0.0.0.0:*               LISTEN      1471/java
tcp        0      0 0.0.0.0:10050           0.0.0.0:*               LISTEN      2358/zabbix_agentd
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      1471/java
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      2066/java
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      2066/java
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1020/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      946/sshd

I tried starting as the root user, in case it was some kind of permissions problem with listening on the port, but that gives the same error.
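
Given that the nested exception is "Cannot assign requested address" rather than "Address already in use", a quick sanity check (a sketch, using the hadoop1 hostname from the logs) is whether that hostname resolves to an address this host actually has:

# What does the hostname resolve to?
getent hosts hadoop1

# Which IPv4 addresses are configured on this host?
ip -4 addr show | grep inet

If the resolved address is missing from the interface list, the bind fails exactly like this.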

Gangboard asked 2/2, 2015 at 15:29

Found the issue. This server's IP address had changed at some point, but the new address had been appended to /etc/hosts rather than replacing the old entry. Since the resolver returns the first matching entry, Hadoop start-up was trying to open 50070 on an address that no longer exists on any interface. The error reading "Port in use" made this a little confusing; the nested "Cannot assign requested address" was the real clue.
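
As an illustration, /etc/hosts would have looked something like this (the old address below is a made-up example; 192.168.1.125 is the current one from the shutdown message):

192.168.1.50    hadoop1 hadoop1.marketstudies.com   # stale entry, old IP (hypothetical)
192.168.1.125   hadoop1 hadoop1.marketstudies.com   # current IP

The resolver returns the first match, so the stale line has to be deleted outright; keeping only the current entry fixed the startup.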

Gangboard answered 2/2, 2015 at 20:25

Download osquery from https://code.facebook.com/projects/658950180885092 and install it.

Then run osqueryi.

When the prompt appears, use this SQL query to see all running Java processes and find their PIDs:

SELECT name, path, pid FROM processes where name= "java";

You will get something that looks like this on a Mac:

  +------+--------------------------------------------------------------------------+-------+
  | name | path                                                                     | pid   |
  +------+--------------------------------------------------------------------------+-------+
  | java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59446 |
  | java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59584 |
  | java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59676 |
  | java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59790 |
  +------+--------------------------------------------------------------------------+-------+

Issue sudo kill <PID> on each of those processes to make sure whatever is holding 0.0.0.0:50070 is gone, for example with the loop below.
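
A minimal sketch, using the hypothetical PIDs from the osquery table above:

# Kill each Java process found by osquery
for pid in 59446 59584 59676 59790; do
  sudo kill "$pid"
done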

After all this, retry sbin/start-dfs.sh and the NameNode should come up.

Bis answered 31/12, 2015 at 15:55

Comment from Gangboard: This doesn't show which process has the port. netstat -anp | grep LISTEN already lists every listening process, and that didn't help here: the problem wasn't an already-running process but a messed-up /etc/hosts file.

Use the following command to check which running processes are using Java:

ps aux | grep java

After that, kill every Hadoop-related process, using its PID from the command above:

sudo kill -9 <PID>
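
A shorter variant, assuming the NameNode was started by the standard Hadoop scripts, is to match on the NameNode main class instead of grepping by hand (pkill -f matches the full command line, so keep the pattern specific):

# List matching PIDs with their full command lines first
pgrep -af org.apache.hadoop.hdfs.server.namenode.NameNode

# Then kill them
sudo pkill -9 -f org.apache.hadoop.hdfs.server.namenode.NameNode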

Bankruptcy answered 1/6, 2017 at 6:51
