Does Kafka python API support stream processing?

I have used Kafka Streams in Java, but I could not find a similar API in Python. Does Apache Kafka support stream processing in Python?

Ladylike answered 19/8, 2018 at 14:59 Comment(2)
There is github.com/wintoncode/winton-kafka-streams -- this is not part of Apache Kafka. I don't know how stable it is or whether it's suitable for production yet.Potence
And there is also github.com/robinhood/faustAtaman

Kafka Streams is only available as a JVM library, but there are a few comparable Python implementations of it.
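
For example, Faust (linked in the comments above) mimics the Kafka Streams agent/table model in pure Python. A minimal word-count-style sketch, assuming a local broker; the app id and topic name are illustrative:

    import faust

    app = faust.App("demo-app", broker="kafka://localhost:9092")
    events_topic = app.topic("events", value_type=str)

    # Tables give Kafka Streams-like local state backed by a changelog topic.
    counts = app.Table("word-counts", default=int)

    @app.agent(events_topic)
    async def count_words(stream):
        async for text in stream:
            for word in text.split():
                # A production word count would repartition with group_by first.
                counts[word] += 1

    if __name__ == "__main__":
        app.main()  # start a worker with: python this_file.py worker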

In theory, you could try playing with Jython or Py4J to work with the JVM implementation, but that would probably require more work than necessary.

Outside of those options, you can also try Apache Beam, Flink, or Spark, but each of them requires an external cluster scheduler to scale out (and also a Java installation).

If you are okay with HTTP methods, then running a ksqlDB instance (again, requiring Java for that server) and invoking its REST interface from Python with the built-in SQL functions can work. However, building your own functions there will require writing JVM-compiled code, last I checked.
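
As an illustration, a push query can be sent to ksqlDB's /query REST endpoint with the requests package; the server URL, stream name, and query below are placeholders, and the exact response framing varies between ksqlDB versions:

    import requests

    KSQLDB_URL = "http://localhost:8088"  # placeholder server address

    resp = requests.post(
        f"{KSQLDB_URL}/query",
        headers={"Accept": "application/vnd.ksql.v1+json"},
        json={
            "ksql": "SELECT * FROM pageviews EMIT CHANGES;",  # placeholder stream
            "streamsProperties": {"ksql.streams.auto.offset.reset": "earliest"},
        },
        stream=True,  # a push query keeps the HTTP response open
    )

    # Rows arrive incrementally; print the raw chunks here, since the exact
    # JSON framing differs across server versions.
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))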

If none of those options are suitable, then you're stuck with the basic consumer/producer methods.
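
For reference, the basic approach looks like this with the kafka-python package (confluent-kafka is a similar alternative); the broker address, topic, and group id are placeholders:

    from kafka import KafkaConsumer, KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", b"hello")
    producer.flush()

    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        group_id="demo-group",
    )
    for record in consumer:
        # Any stateful processing is up to you; there is no streams DSL here.
        print(record.value)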

Scevo answered 19/8, 2018 at 15:59 Comment(5)
Are there any examples or tutorials on using docs.confluent.io/current/ksql/docs/tutorials/… with faust streaming?Cluny
KSQL is implemented in Java, so I'm not sure I understand the questionScevo
@circket_007, KSQL is not available in Python. That is what you mean, am I right?Cluny
@Maha KSQL server has a REST API, so you can submit queries from any languageScevo
btw: here is the direct link to the forked project: github.com/faust-streaming/faustMachinate

If you are using Apache Spark, you can use Kafka as the producer and Spark Structured Streaming as the consumer, with no need to rely on third-party libraries like Faust.

To consume Kafka data streams in Spark, use the Structured Streaming + Kafka Integration Guide.
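
Following that guide, a minimal consumer script might look like this (the broker address and topic name are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("StructuredStreaming").getOrCreate()

    # Subscribe to a Kafka topic as a streaming DataFrame.
    df = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "events")
        .load()
    )

    # Kafka keys and values arrive as binary; cast them before processing.
    query = (
        df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
        .writeStream
        .format("console")
        .start()
    )
    query.awaitTermination()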

Keep in mind that you will have to append the spark-sql-kafka package when using spark-submit:

spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 StructuredStreaming.py

This solution has been tested with Spark 3.0.1 and Kafka 2.7.0 with PySpark.

This resource can also be useful.

Curley answered 21/3, 2021 at 14:29 Comment(1)
And if you're a Python person, you can write the originating code (the code that is, for example, probing a vibration sensor) in Python, and use either the Kafka Python library to publish messages directly, or fluentd to publish JSON provided by a Python scriptWeeden

Previously there was no Kafka Streams-style Python API, but one is now available with the kstreams Python library: https://pypi.org/project/kstreams/ (a minimal sketch follows the feature list below).

Features:

  1. Produce events
  2. Consume events with Streams
  3. Prometheus metrics and custom monitoring
  4. TestClient
  5. Custom serialization and deserialization
  6. Easy to integrate with any async framework; not tied to any library!
  7. Yield events from streams
  8. Store (Kafka Streams pattern)
  9. Stream joins
  10. Windowing
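
A minimal sketch along the lines of the project's README (the engine title and topic name are illustrative, and the API may have evolved since):

    import asyncio

    from kstreams import ConsumerRecord, create_engine

    stream_engine = create_engine(title="demo-engine")

    @stream_engine.stream("local--demo-topic")
    async def consume(cr: ConsumerRecord):
        print(f"consumed: {cr.value}")

    async def main():
        await stream_engine.start()
        await stream_engine.send("local--demo-topic", value=b'{"msg": "hello"}')
        await asyncio.sleep(5)  # give the stream a moment to consume
        await stream_engine.stop()

    asyncio.run(main())
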
Appalling answered 2/1, 2023 at 16:38 Comment(1)
Those last three features are not implemented, according to the docsScevo

There is a relatively new library called FastStream:

FastStream is a powerful and easy-to-use Python framework for building asynchronous services interacting with event streams such as Apache Kafka, RabbitMQ, NATS and Redis.

It looks really good and quite simple to use, although I haven't personally used it yet. It is under active development and supports more brokers besides Kafka (like RabbitMQ).
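
A minimal sketch of its documented subscriber/publisher pattern (topic names are placeholders):

    from faststream import FastStream
    from faststream.kafka import KafkaBroker

    broker = KafkaBroker("localhost:9092")
    app = FastStream(broker)

    # Consume from one topic, transform, and publish the result to another.
    @broker.subscriber("in-topic")
    @broker.publisher("out-topic")
    async def handle(msg: str) -> str:
        return msg.upper()

    # Start the service with the CLI: faststream run this_module:app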

Madelyn answered 12/12, 2023 at 18:23 Comment(0)
