Kafka Confluent error - java.net.BindException: Address already in use
I am running Kafka via the Confluent Platform and have followed the documented steps, but I am hitting java.net.BindException: Address already in use.

The documentation I followed is here: https://docs.confluent.io/2.0.0/quickstart.html#quickstart

Start ZooKeeper:

$ ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties

Start Kafka:

$ ./bin/kafka-server-start ./etc/kafka/server.properties

Next, when I run the Schema Registry start command,

$ ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

I observe this error: java.net.BindException: Address already in use

I am running all of this locally on a MacBook. Could someone please help me solve this "address already in use" error?

Console log:

 EFGHS-MER648W:confluent-4.0.0 user$ sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
Password: 
[2018-01-09 13:09:03,510] INFO SchemaRegistryConfig values: 
    metric.reporters = []
    kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.ssl.trustmanager.algorithm = PKIX
    authentication.realm = 
    ssl.keystore.type = JKS
    kafkastore.topic = _schemas
    metrics.jmx.prefix = kafka.schema.registry
    kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    kafkastore.topic.replication.factor = 3
    ssl.truststore.password = [hidden]
    kafkastore.timeout.ms = 500
    host.name = 192.168.0.13
    kafkastore.bootstrap.servers = []
    schema.registry.zk.namespace = schema_registry
    kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
    kafkastore.sasl.kerberos.service.name = 
    schema.registry.resource.extension.class = 
    ssl.endpoint.identification.algorithm = 
    compression.enable = false
    kafkastore.ssl.truststore.type = JKS
    avro.compatibility.level = backward
    kafkastore.ssl.protocol = TLS
    kafkastore.ssl.provider = 
    kafkastore.ssl.truststore.location = 
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    kafkastore.ssl.keystore.type = JKS
    ssl.truststore.type = JKS
    kafkastore.ssl.truststore.password = [hidden]
    access.control.allow.origin = 
    ssl.truststore.location = 
    ssl.keystore.password = [hidden]
    port = 8081
    kafkastore.ssl.keystore.location = 
    master.eligibility = true
    ssl.client.auth = false
    kafkastore.ssl.keystore.password = [hidden]
    kafkastore.security.protocol = PLAINTEXT
    ssl.trustmanager.algorithm = 
    authentication.method = NONE
    request.logger.name = io.confluent.rest-utils.requests
    ssl.key.password = [hidden]
    kafkastore.zk.session.timeout.ms = 30000
    kafkastore.sasl.mechanism = GSSAPI
    kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
    kafkastore.ssl.key.password = [hidden]
    zookeeper.set.acl = false
    schema.registry.inter.instance.protocol = http
    authentication.roles = [*]
    metrics.num.samples = 2
    ssl.protocol = TLS
    schema.registry.group.id = schema-registry
    kafkastore.ssl.keymanager.algorithm = SunX509
    kafkastore.connection.url = localhost:2181
    debug = false
    listeners = [http://0.0.0.0:8081]
    kafkastore.group.id = 
    ssl.provider = 
    ssl.enabled.protocols = []
    shutdown.graceful.ms = 1000
    ssl.keystore.location = 
    ssl.cipher.suites = []
    kafkastore.ssl.endpoint.identification.algorithm = 
    kafkastore.ssl.cipher.suites = 
    access.control.allow.methods = 
    kafkastore.sasl.kerberos.min.time.before.relogin = 60000
    ssl.keymanager.algorithm = 
    metrics.sample.window.ms = 30000
    kafkastore.init.timeout.ms = 60000
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:175)
[2018-01-09 13:09:03,749] INFO Logging initialized @629ms (org.eclipse.jetty.util.log:186)
[2018-01-09 13:09:04,202] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://192.168.0.13:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore:103)
[2018-01-09 13:09:04,475] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore:228)
[2018-01-09 13:09:04,482] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:242)
[2018-01-09 13:09:04,567] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:138)
[2018-01-09 13:09:04,567] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:72)
[2018-01-09 13:09:04,651] INFO Wait to catch up until the offset of the last message at 7 (io.confluent.kafka.schemaregistry.storage.KafkaStore:277)
[2018-01-09 13:09:04,675] INFO Joining schema registry with Zookeeper-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:210)
[2018-01-09 13:09:04,682] INFO Created schema registry namespace localhost:2181/schema_registry (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector:161)
[2018-01-09 13:09:04,705] INFO Successfully elected the new master: {"host":"192.168.0.13","port":8081,"master_eligibility":true,"scheme":"http","version":1} (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector:102)
[2018-01-09 13:09:04,715] INFO Wait to catch up until the offset of the last message at 8 (io.confluent.kafka.schemaregistry.storage.KafkaStore:277)
[2018-01-09 13:09:04,778] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.Application:182)
[2018-01-09 13:09:04,844] INFO jetty-9.2.22.v20170606 (org.eclipse.jetty.server.Server:327)
[2018-01-09 13:09:05,411] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version:27)
[2018-01-09 13:09:05,547] INFO Started o.e.j.s.ServletContextHandler@54c62d71{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2018-01-09 13:09:05,555] WARN FAILED NetworkTrafficServerConnector@4879dfad{HTTP/1.1}{0.0.0.0:8081}: java.net.BindException: Address already in use (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
[2018-01-09 13:09:05,557] WARN FAILED io.confluent.rest.Application$1@388526fb: java.net.BindException: Address already in use (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
[2018-01-09 13:09:05,558] ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.server.Server.doStart(Server.java:366)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
O2771-C02K648W:confluent-4.0.0 user$ 

Please help me solve this error.

Thanks,

Update: when I run the command ps aux | grep schema-registry, I get:

O2771-C02K648W:~ user$ ps aux | grep schema-registry
root             20888   0.1  1.5  4584980 255588   ??  S     6:14PM   1:05.15 /usr/bin/java -Xmx512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dschema-registry.log.dir=/Users/user/Downloads/confluent-4.0.0/bin/../logs -Dlog4j.configuration=file:/Users/user/Downloads/confluent-4.0.0/bin/../etc/schema-registry/log4j.properties -cp :/Users/user/Downloads/confluent-4.0.0/bin/../package-schema-registry/target/kafka-schema-registry-package-*-development/share/java/schema-registry/*:/Users/user/Downloads/confluent-4.0.0/bin/../share/java/confluent-common/*:/Users/user/Downloads/confluent-4.0.0/bin/../share/java/rest-utils/*:/Users/user/Downloads/confluent-4.0.0/bin/../share/java/schema-registry/* io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain ./etc/schema-registry/schema-registry.properties
root             20887   0.0  0.0  2456112   3452   ??  S     6:14PM   0:00.02 sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
user     25256   0.0  0.0  2432804   1984 s001  S+    1:41PM   0:00.00 grep schema-registry
O2771-C02K648W:~ user$ 
Shebeen answered 9/1, 2018 at 19:19 Comment(11)
Is it possible you have another program running that is using the port? Perhaps an older instance of schema-registry? Run ps aux | grep schema-registry to check. – Mulish
I ran that command and updated the result in the question above. – Shebeen
After I ran bin/schema-registry-stop, it seems to be working now. – Shebeen
Nice! I think a schema-registry you ran sometime in the past did not stop and was still occupying the address, so the new schema-registry couldn't use it. – Mulish
Do I need to kill it like this, sudo kill -9 PID 25256? – Shebeen
After those commands up to schema-registry, when I run $ bin/kafka-rest-start I am getting config errors again. – Shebeen
[2018-01-09 13:51:52,045] ERROR Server died unexpectedly: (io.confluent.kafkarest.KafkaRestMain:63) java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured – Shebeen
To force-kill a process, the command is kill -9 [PID]; sudo lets you run commands with higher privileges, so sudo kill -9 25256 will kill the process with PID 25256. But in your case that process is already dead: it was the grep you ran to get the output. ps aux | grep schema-registry takes all running processes and prints their info if 'schema-registry' appears in the description, and your search's own process had schema-registry in its description. In your case 20888 and 20887 are the schema-registry process IDs (but they won't always be!). – Mulish
Let us continue this discussion in chat. – Mulish
Yes, now schema-registry is running fine. I am facing a new error with the REST command; please see the issue here: #48176937 – Shebeen
That quickstart is really old. Confluent Platform is on 4.0 now: docs.confluent.io/current/quickstart.html – Matriculate
You can find out which process is using port 8081 with the commands below.

  • netstat -vanp tcp | grep 8081

For OS X El Capitan and newer (or if your netstat doesn't support -p), use lsof:

  • sudo lsof -i tcp:8081

On Windows, the equivalents are netstat -ano | findstr 8081 and taskkill /F /PID {PID}; on macOS/Linux, force-kill the process with kill -9 {PID}.

You can also try changing the Schema Registry default port to another port that is not in use.
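A sketch of that option, assuming the stock config file shipped with the quickstart: the listeners setting in ./etc/schema-registry/schema-registry.properties (shown as http://0.0.0.0:8081 in the log above) can point at any unused port, e.g. 8082:

```
# schema-registry.properties: bind the REST endpoint to 8082 instead of the default 8081
listeners=http://0.0.0.0:8082
```

Then start Schema Registry as before; it will serve on port 8082 instead of fighting over 8081.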

Crespi answered 9/1, 2018 at 22:52 Comment(2)
This doesn't always find the process. Don't ask me why; it's just what I observed. – Transship
You do not need sudo on Mac. – Phila
Check which process holds the Schema Registry port using the commands below (here the registry listens on 8093):

[root@in-ibmibm3718 /]# netstat -tnpl |grep 8093

tcp        0      0 0.0.0.0:8093            0.0.0.0:*               LISTEN      16862/java

[root@in-ibmibm3718 /]# ps -eaf |grep -i 16862

kafka    16862     1 35 14:35 ?        01:15:19 /usr/jdk64/java-1.8.0-openjdk/bin/java -Xmx512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dschema-registry.log.dir=/var/log/kafka -Dlog4j.configuration=file:/opt/IBM/basecamp/basecamp-schema-registry/bin/../etc/schema-registry/log4j.properties -cp :/opt/IBM/basecamp/basecamp-schema-registry/bin/../package-schema-registry/target/kafka-schema-registry-package-*-development/share/java/schema-registry/*:/opt/IBM/basecamp/basecamp-schema-registry/bin/../share/java/confluent-common/*:/opt/IBM/basecamp/basecamp-schema-registry/bin/../share/java/rest-utils/*:/opt/IBM/basecamp/basecamp-schema-registry/bin/../share/java/schema-registry/* io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain /opt/IBM/basecamp/basecamp-schema-registry/etc/schema-registry/schema-registry.properties

Kill that process and start the Schema Registry again.
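The find-and-kill flow can be sketched end to end. This is only a demo, not the setup above: it starts a throwaway Python HTTP server on the arbitrary port 18081 to stand in for the stuck registry process, locates its PID by port, and terminates it, escalating to kill -9 only if a plain kill doesn't work.

```shell
# Start a throwaway listener to stand in for the stuck process
python3 -m http.server 18081 >/dev/null 2>&1 &

sleep 1

# Find the PID bound to the port (same role as netstat -tnpl | grep 18081 on Linux)
PID=$(lsof -ti tcp:18081 2>/dev/null)
# If lsof is unavailable, the background job's PID ($!) is the same process
PID=${PID:-$!}
echo "port 18081 is held by PID $PID"

# Graceful SIGTERM first; escalate to kill -9 only if the process survives
kill "$PID" 2>/dev/null
sleep 1
if kill -0 "$PID" 2>/dev/null; then
    kill -9 "$PID"
fi
wait "$PID" 2>/dev/null || true
echo "port 18081 freed"
```

The SIGTERM-then-SIGKILL order gives the process a chance to shut down cleanly (flushing logs, closing sockets) before being forced.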

Mopboard answered 19/6, 2019 at 10:36 Comment(0)
I had the same issue; force-kill the currently running ZooKeeper process:

$ ps -ef | grep zookeeper
$ kill -9 <process number>

Then start ZooKeeper and Kafka again.

Thaliathalidomide answered 3/9, 2018 at 6:2 Comment(1)
The error in the question is about Schema Registry, so finding and killing ZooKeeper isn't the solution. – Bourgeoisie