What's the purpose of Kafka's key/value pair-based messaging?
Asked Answered

All of the examples of Kafka producers show the ProducerRecord's key/value pair as not only being the same type (all examples show <String,String>), but also the same value. For example:

producer.send(new ProducerRecord<String, String>("someTopic", Integer.toString(i), Integer.toString(i)));

But in the Kafka docs, I can't seem to find where the key/value concept (and its underlying purpose/utility) is explained. In traditional messaging (ActiveMQ, RabbitMQ, etc.) I've always fired a message at a particular topic/queue/exchange. But Kafka is the first broker that seems to require key/value pairs instead of just a regular ol' string message.

So I ask: What is the purpose/usefulness of requiring producers to send KV pairs?

Egide answered 29/11, 2016 at 17:52 Comment(1)
Conceptually, an event has a key, value, timestamp, and optional metadata headers. Here's an example event: Event key: "Alice" Event value: "Made a payment of $200 to Bob" Event timestamp: "Jun. 25, 2020 at 2:06 p.m."Griseous

Kafka uses the abstraction of a distributed log that consists of partitions. Splitting a log into partitions allows the system to scale out.

Keys are used to determine the partition within a log to which a message gets appended, while the value is the actual payload of the message. The examples are actually not very "good" in this regard; usually you would have a complex type as the value (like a tuple type, JSON, or similar) and you would extract one field to use as the key.
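
To make this concrete, here is a minimal sketch (the topic name "payments" and the JSON payload are invented for illustration): one field of the payload is pulled out and used as the key, while the value carries the full event.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // The key is one field extracted from the event; the value is the whole payload.
            String userId = "alice";
            String payload = "{\"user\":\"alice\",\"amount\":200,\"to\":\"bob\"}";

            // All events keyed by "alice" are hashed to the same partition by the default partitioner.
            producer.send(new ProducerRecord<>("payments", userId, payload));
        }
    }
}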

See: http://kafka.apache.org/intro#intro_topics and http://kafka.apache.org/intro#intro_producers

In general, the key and/or the value can be null, too. If the key is null, a random partition will be selected. If the value is null, it can have special "delete" semantics if you enable a log-compaction policy instead of a log-retention policy for the topic (http://kafka.apache.org/documentation#compaction).
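
A rough sketch of both "null" cases (the topic names here are made up, and this assumes String serializers as in the snippet above):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NullKeyAndTombstoneExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Null key: the producer picks the partition itself, with no key-based hashing.
            producer.send(new ProducerRecord<>("someTopic", null, "just a payload"));

            // Null value on a log-compacted topic: a "tombstone" that marks key "user-42" for deletion.
            producer.send(new ProducerRecord<>("compactedTopic", "user-42", null));
        }
    }
}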

Soekarno answered 29/11, 2016 at 20:53 Comment(10)
And notably, keys also play a relevant part in the streaming API of Kafka, with KStream and KTable - see here.Kreager
Keys can be used to determine the partition, but it's just a default strategy of the producer. Ultimately, it is the producer who chooses which partition to use.Densitometer
@Densitometer Does the key have more uses?Tackle
It can be used to keep only one instance of a message per key, as mentioned in the log compaction link. I don't know about other use-cases.Densitometer
@Densitometer I thought partitions are hidden from the producers and you find out after you have written to a topic which partition it was written toSteinke
@Steinke By default, this is correct. However, gvo is also correct: the API allows you to specify the partition number explicitly.Soekarno
If the key in the constructor is used to select the partition, what is the purpose of the "Integer partition" parameter in this constructor: public ProducerRecord(java.lang.String topic, java.lang.Integer partition, K key, V value)Capillaceous
If you specify the partition parameter, it will be used and the key will be "ignored" (of course, the key will still be written into the topic). -- This allows you to have customized partitioning even if you have keys.Soekarno
so "key" in Kafka basically plays the same role as "partition key" in AWS Kinesis?Sewer
By default, yes. -- But the behavior can be changed, either by specifying the target partition explicitly or by providing a custom partitioner that may also use value data to compute the target partition (see the sketch below). -- For compacted topics, the key also has a special purpose and acts as an "id".Soekarno
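
To illustrate the last two comments, here is a hedged sketch of a custom Partitioner that routes on the value instead of the key, plus the explicit-partition constructor; the class name, topic name, and partition number are invented for illustration.

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.Cluster;

public class ValueBasedPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Route on the value instead of the key (a stable, non-negative hash of the payload).
        return value == null ? 0 : (value.hashCode() & 0x7fffffff) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Register the custom partitioner; it replaces the default key-based hashing for this producer.
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, ValueBasedPartitioner.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("someTopic", "someKey", "someValue"));
            // Alternatively, pin the partition explicitly; the key is then not used for routing.
            producer.send(new ProducerRecord<>("someTopic", 2, "someKey", "someValue"));
        }
    }
}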

Late addition... Specifying the key so that all messages with the same key go to the same partition is very important for proper ordering of message processing if you have multiple consumers in a consumer group on a topic.

Without a key, two related messages could end up in different partitions and be processed by different consumers in the group out of order.
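
A small sketch of that point, with an invented "orders" topic: two updates for the same order share the key "order-123", so they land in the same partition and the single consumer owning that partition processes them in the order they were sent.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderedByKeyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key -> same partition -> CREATED is consumed before SHIPPED.
            producer.send(new ProducerRecord<>("orders", "order-123", "CREATED"));
            producer.send(new ProducerRecord<>("orders", "order-123", "SHIPPED"));
            // With a null key, these two records could land in different partitions and be
            // handled by different consumers in the group, losing the per-order ordering.
        }
    }
}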

Campaign answered 18/10, 2019 at 12:1 Comment(0)

Another interesting use case

We could use the key attribute of a Kafka message to carry a user_id and then plug in a consumer to fetch the streaming events (the events themselves stored in the value attribute). This could allow you to process any max-history of user event sequences for creating features in your machine learning models (a rough consumer sketch follows).
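
A rough, unverified sketch of that idea (the topic name "user-events" and the group id are invented; it assumes events are produced with the user_id as the record key): the consumer simply groups incoming values by key to build up per-user histories.

import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class UserHistoryConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "feature-builder");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, List<String>> historyByUser = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // record.key() is the user_id, record.value() is the event payload.
                    historyByUser.computeIfAbsent(record.key(), k -> new ArrayList<>()).add(record.value());
                }
            }
        }
    }
}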

I still have to find out if this is possible or not. Will keep updating my answer with further details.

Ga answered 29/1, 2020 at 3:39 Comment(0)
