Using a connector with Helm-installed Kafka/Confluent

I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html like so:

helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360

The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).
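For reference, a sketch of what that scaled-down override could look like (the `cp-kafka`/`brokers` and `cp-zookeeper`/`servers` key names are taken from the chart's default values.yaml; verify them against your chart version):

```yaml
# kafka_config.yaml -- override the default replica counts of 3
cp-zookeeper:
  servers: 1
cp-kafka:
  brokers: 1
```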

Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:

[screenshot: output of kubectl get pods showing the Kafka pods and the MySQL pod]

I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:

Install your connector

Use the Confluent Hub client to install this connector with:

confluent-hub install debezium/debezium-connector-mysql:0.9.2

Sounds good, except 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.

Questions:

  1. Does confluent-hub not come installed via those Helm charts?
  2. Do I have to install confluent-hub myself?
  3. If so, which pod do I have to install it on?
Snide answered 1/4, 2019 at 19:46 Comment(2)
Shot in the dark, but shouldn't the connector be installed in the kafka-connect pod? You may also want to take a look at Strimzi. It provides a Kubernetes-native way to roll Kafka clusters on k8s.Mozambique
See "Install a Kafka Connect plugin automatically" -- rmoff.net/2018/12/15/docker-tips-and-tricks-with-ksql-and-kafka i.e. Change the command of the Connect containerHide

Ideally this would be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around it is to build a new Docker image from Confluent's Kafka Connect image: download the connector manually, extract its contents into a folder, and copy them to a path inside the container. Something like the below.

Contents of Dockerfile

FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java

/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and supply it via the plugin.path setting during your Helm installation.

Build this image and host it somewhere accessible. You will also have to override the image and tag values during the Helm installation.

The image and plugin.path values can be found in the chart's values.yaml file.
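Put together, the workflow might look like the following. The registry name is illustrative, and the cp-kafka-connect.image / cp-kafka-connect.imageTag value names are assumptions based on the chart's values.yaml, so check them against your chart version:

```shell
# Build and publish the Connect image that contains the extracted connector
docker build -t my-registry.example.com/cp-kafka-connect-debezium:5.2.1 .
docker push my-registry.example.com/cp-kafka-connect-debezium:5.2.1

# Install the chart, pointing the Connect deployment at the custom image
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=my-registry.example.com/cp-kafka-connect-debezium \
  --set cp-kafka-connect.imageTag=5.2.1
```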

Hamitic answered 8/5, 2019 at 21:59 Comment(0)

Just an add-on to Jegan's comment above: https://mcmap.net/q/1437503/-using-a-connector-with-helm-installed-kafka-confluent

You can use the Dockerfile below (recommended):

FROM confluentinc/cp-server-connect-operator:5.4.0.0

RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0

Or you can use a Docker multi-stage build instead:

FROM confluentinc/cp-server-connect-operator:5.4.0.0

COPY --from=debezium/connect:1.0 \
    /kafka/connect/debezium-connector-postgres/ \
    /usr/share/confluent-hub-components/debezium-connector-postgres/

This saves you the time of tracking down the right JAR files for plugins like debezium-connector-postgres.

From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors

Sofko answered 6/2, 2020 at 0:0 Comment(0)

The Kafka Connect pod should already have confluent-hub installed; that is the pod you should run the commands on.

Ulita answered 1/4, 2019 at 21:43 Comment(2)
If the pod restarts, that connector will be gone, thoughHide
Ignoring @cricket_007's question for the moment (although that might be problematic) - when I run bash on that pod, I enter confluent-hub as a command, and I get "command not found"Snide

The cp-kafka-connect pod has 2 containers, one of which is the cp-kafka-connect-server container. That container has confluent-hub installed, so you can log in to it and run your connector commands there. To get a shell in that container, run the following command:

kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
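Once inside that container, the install command from the question should work as-is; a sketch (assuming the image ships the confluent-hub client):

```shell
# Run inside the cp-kafka-connect-server container
confluent-hub install --no-prompt debezium/debezium-connector-mysql:0.9.2
```

Note that anything installed this way lives only in the container's filesystem and is lost when the pod restarts, as pointed out in a comment above.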
Unclear answered 10/12, 2019 at 12:33 Comment(0)

As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.

See README.md

The script can be passed in as a Secret and mounted as a volume.
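A sketch of how that could be wired up, assuming the chart exposes customEnv plus volume settings for the Connect pod (the volume key names and script path here are illustrative; check the chart's README and values.yaml for the exact structure):

```yaml
cp-kafka-connect:
  customEnv:
    # Script run at container startup to install connectors
    CUSTOM_SCRIPT_PATH: /etc/scripts/install-connectors.sh
  volumes:
    - name: connect-scripts
      secret:
        secretName: connect-install-script
  volumeMounts:
    - name: connect-scripts
      mountPath: /etc/scripts
```

The Secret itself would be created beforehand, e.g. with kubectl create secret generic connect-install-script --from-file=install-connectors.sh --namespace cust360.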

Disequilibrium answered 13/5, 2020 at 14:41 Comment(1)
Could you expand your answer? I feel this is the right track but can't manage it on my own. Thanks!Cuttlefish
