Kafka Connect can't find connector
I'm trying to use the Kafka Connect Elasticsearch connector, and am unsuccessful. It is crashing with the following error:

[2018-11-21 14:48:29,096] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:108)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.elasticsearch.ElasticsearchSinkConnector , available connectors are: PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='1.0.1', encodedVersion=1.0.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='1.0.1', encodedVersion=1.0.1, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', version='1.0.1', encodedVersion=1.0.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, 
name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}

I've got a build for the plugin unzipped in a kafka subfolder, and have the following line in connect-standalone.properties:

plugin.path=/opt/kafka/plugins/kafka-connect-elasticsearch-5.0.1/src/main/java/io/confluent/connect/elasticsearch

I can see the various connectors inside that folder, but Kafka Connect does not load them. It does load the standard connectors, like this:

[2018-11-21 14:56:28,258] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:136)
[2018-11-21 14:56:28,259] INFO Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:335)
[2018-11-21 14:56:28,260] INFO Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:335)

How can I properly register the connectors?

Linearity answered 21/11, 2018 at 13:1 Comment(0)

Yesterday I ran the JDBC connector manually on Kafka in Docker, without Confluent Platform etc., just to learn how these things work underneath. I did not have to build a JAR on my side or anything like that. Hopefully this will be relevant for you - what I did is (I will skip the Docker parts, how to mount the dir with the connector, etc.):

  • download the connector from https://www.confluent.io/connector/kafka-connect-jdbc/ and unpack the zip
  • put the contents of the zip into a directory in the path configured in the properties file (shown below in the 3rd point) -

    plugin.path=/plugins
    

    so tree looks something like this:

    /plugins/
    └── jdbcconnector
        ├── assets
        ├── doc
        ├── etc
        └── lib
    

    Note the lib dir, which is where the dependencies are; one of them is kafka-connect-jdbc-5.0.0.jar

  • Now you can try to run the connector

    ./connect-standalone.sh connect-standalone.properties jdbc-connector-config.properties
    

    connect-standalone.properties contains the common properties needed for Kafka Connect; in my case:

    bootstrap.servers=localhost:9092
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=true
    value.converter.schemas.enable=true
    offset.storage.file.filename=/tmp/connect.offsets
    offset.flush.interval.ms=10000
    plugin.path=/plugins
    rest.port=8086
    rest.host.name=127.0.0.1
    

    jdbc-connector-config.properties is more involved, as it's the configuration for this particular connector; you need to dig into the connector docs - for the JDBC source it is https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html
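As a rough illustration only, a minimal jdbc-connector-config.properties for a JDBC source might look like the sketch below; the connection URL, table name, and topic prefix are placeholders you would replace with your own values (check the linked docs for the full option list):

```properties
name=jdbc-source-demo
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# Placeholder connection details -- replace with your own database
connection.url=jdbc:postgresql://localhost:5432/mydb?user=kafka&password=secret
table.whitelist=orders
# Poll new rows by an auto-incrementing ID column
mode=incrementing
incrementing.column.name=id
topic.prefix=jdbc-
```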

Elliotelliott answered 22/11, 2018 at 12:6 Comment(2)
For the JDBC Connect, I've noticed drivers can even be placed in a subfolder of that. E.g. lib/drivers – Vulcanize
I keep getting "No such file or directory" when I try to execute plugin.path, any idea why? I am totally new to this – Lento

The compiled JAR needs to be available to Kafka Connect. You have a few options here:

  1. Use Confluent Platform, which includes the Elasticsearch connector (and others) pre-built: https://www.confluent.io/download/. There are zip, rpm/deb, Docker images, etc. available.

  2. Build the JAR yourself. This typically involves:

    cd kafka-connect-elasticsearch-5.0.1
    mvn clean package
    

    Then take the resulting kafka-connect-elasticsearch-5.0.1.jar JAR and put it in a path as configured in Kafka Connect with plugin.path.
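To sanity-check the layout afterwards, you can mimic it and list what Connect will scan. This sketch uses a temp directory as a stand-in for the real plugin.path, and the directory and JAR names are only examples:

```shell
# Recreate the expected layout under a temp dir (stand-in for plugin.path)
PLUGIN_PATH=$(mktemp -d)
mkdir -p "$PLUGIN_PATH/kafka-connect-elasticsearch-5.0.1"
touch "$PLUGIN_PATH/kafka-connect-elasticsearch-5.0.1/kafka-connect-elasticsearch-5.0.1.jar"

# Kafka Connect scans each immediate subdirectory of plugin.path for JARs,
# so the JAR must sit one level below the configured path
find "$PLUGIN_PATH" -maxdepth 2 -name '*.jar'
```

If the find command prints nothing, Connect will not see the connector either.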

You can find more info on using Kafka Connect here:

Disclaimer: I work for Confluent, and wrote the above blog posts.

Riboflavin answered 21/11, 2018 at 13:23 Comment(3)
Building it myself results in this: [ERROR] [ERROR] Some problems were encountered while processing the POMs: [FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-elasticsearch:[unknown-version]: Could not transfer artifact io.confluent:common:pom:5.0.1 from/to confluent (${confluent.maven.repo}): – Linearity
¯\_(ツ)_/¯ that's a good reason to use the pre-built version in Confluent Platform ;) It's open source, it's free to use. If you really don't want to, you can download it and simply extract the JAR and deploy it in your existing installation. – Riboflavin
OK, I've saved the pre-built version in /var/confluentinc-kafka-connect-elasticsearch-5.0.0/. In my config, I have this line: plugin.path=/var/silverbolt/confluentinc-kafka-connect-elasticsearch-5.0.0/ . I'm still getting the same error about no matching connector class. – Linearity

The plugin path must contain JAR files with compiled code, not the raw Java source files (src/main/java).

It also needs to be the parent directory of the directories containing those plugins.

plugin.path=/opt/kafka-connect/plugins/

Where

$ ls -lR /opt/kafka-connect/plugins/
kafka-connect-elasticsearch-x.y.z/
    file1.jar
    file2.jar 
    etc

Ref - Manually installing Community Connectors

The Kafka Connect startup scripts in the Confluent Platform automatically read (or at least used to read) all folders matching share/java/kafka-connect-*, too, so that's one way to go. It will continue doing so if you also include the path to the Confluent installation's share/java folder in the plugin path.
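Since plugin.path accepts a comma-separated list of locations, you can point Connect at both your own plugins directory and the Confluent share/java folder at once; for example (both paths are assumptions to adjust to your installation):

```properties
# Sketch -- example paths; adjust to your own layout
plugin.path=/opt/kafka-connect/plugins,/opt/confluent-5.0.1/share/java
```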

If you are not very familiar with Maven, or even if you are, you cannot just clone the Elasticsearch connector repo and build the master branch; it requires building Kafka and then Confluent's common repo first. Otherwise, you must check out a Git tag like 5.0.1-post that matches a Confluent release.

An even simpler option would be to grab the package using the Confluent Hub CLI, e.g. confluent-hub install confluentinc/kafka-connect-elasticsearch:5.0.1

And if none of that works, just downloading the Confluent Platform and using the Kafka Connect scripts would be the easiest. This does not mean you need to use the Kafka or ZooKeeper configurations from it.

Vulcanize answered 21/11, 2018 at 14:32 Comment(4)
I built the JAR successfully, moved it into a folder under /plugins/, added the path to the config, and am still getting the same "failed to find any class" error. – Linearity
I believe the Elastic connector actually makes a tar.gz file that you need to extract. It doesn't create just one JAR with all the needed classes – Vulcanize
I'm looking at the target folder, and it created a kafka-connect-elasticsearch-5.0.1.jar file. No tar.gz that I can see. – Linearity
For me, adding a comma at the end solved the issue: plugin.path=/opt/bitnami/kafka/connectors,. Without the comma, Kafka kept complaining that it failed to find the class. – Inordinate

For me it worked once I put the JARs in a subdirectory of the folder that plugin.path points to.

plugin.path=connectors

connectors/
   plugins/
      kafka-connect-twitter-0.2.26.jar
      .... <the rest of the jars>
Slavish answered 4/3, 2022 at 21:33 Comment(1)
write readable code please – Astronaut

© 2022 - 2024 — McMap. All rights reserved.