Kafka Message Keys with Composite Values

Question

I am working on a system that will produce Kafka messages. These messages will be organized into topics that more or less represent database tables. Many of these tables have composite keys, and this aspect of the design is out of my control. The goal is to prepare these messages so that they can be easily consumed by common sink connectors, without a lot of manipulation.

I will be using the Schema Registry and the Avro format for all of the obvious advantages. Having the entire "row" expressed as a record in the message value is fine for upsert operations, but I also need to support deletes. From what I can tell, this means my messages need a key so that I can have "tombstone" messages. Also keep in mind that I want to avoid any sort of transforms unless absolutely necessary.
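For reference, a tombstone is simply a message with a non-null key and a null value. A minimal sketch is below; the topic name, key, broker address, and serializers are assumptions chosen purely for illustration.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class TombstoneSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null value with a non-null key is a tombstone: log compaction
            // and delete-aware sink connectors treat it as "delete this key".
            producer.send(new ProducerRecord<>("customers", "some-key", null));
        }
    }
}
```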

In a perfect world, the message key would be a "record" containing strongly typed key-column values, and the message value would hold the other column values (both governed by the Schema Registry). However, a lot of the tooling around Kafka seems to expect message keys to be a single, primitive value. This makes me wonder whether I need to compute a key by concatenating my multiple key columns into a single string and keeping the individual columns in my message value. Is this right, or am I missing something? What other options do I have?
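If the record-keyed approach is viable, the producer side might look roughly like the sketch below. The schema names, field names, topic, broker address, and registry URL are all assumptions made up for illustration; the point is only that the composite key lives in its own Avro record, registered in the Schema Registry just like the value.

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class CompositeKeyProducerSketch {
    public static void main(String[] args) {
        // Strongly typed composite key: both key columns live in the key record.
        Schema keySchema = SchemaBuilder.record("OrderItemKey").fields()
                .requiredLong("order_id")
                .requiredInt("line_number")
                .endRecord();

        // The remaining "row" columns go in the value record.
        Schema valueSchema = SchemaBuilder.record("OrderItem").fields()
                .requiredString("sku")
                .requiredInt("quantity")
                .endRecord();

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumed
        props.put("schema.registry.url", "http://localhost:8081"); // assumed
        props.put("key.serializer", KafkaAvroSerializer.class.getName());
        props.put("value.serializer", KafkaAvroSerializer.class.getName());

        GenericRecord key = new GenericData.Record(keySchema);
        key.put("order_id", 42L);
        key.put("line_number", 7);

        GenericRecord value = new GenericData.Record(valueSchema);
        value.put("sku", "ABC-123");
        value.put("quantity", 3);

        try (KafkaProducer<GenericRecord, GenericRecord> producer = new KafkaProducer<>(props)) {
            // Upsert: full row in the value. A delete would reuse the same key
            // with a null value, as in the tombstone sketch above.
            producer.send(new ProducerRecord<>("order_items", key, value));
        }
    }
}
```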

Tags: apache-kafka, apache-kafka-connect, confluent-platform

Solution


I wanted to follow up with the answer that solved my problem:

  1. The strategy I mentioned of concatenating the key columns into a single string does work technically. However, it is certainly not elegant.
  2. My original problem with using a structured key was that I was not using the correct converter to deserialize the key, which led to other errors. Once I switched to the Avro converter for the key, I was able to get my multi-part key and use it effectively (the sink connector settings that made this work are sketched after this list).
  3. Both approaches, when implemented properly, allowed me to produce valid tombstone messages that can represent deletes.
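For completeness, here is a hedged sketch of what the sink side of option 2 might look like with a JDBC sink connector. The connector name, topic, JDBC URL, and registry URL are assumptions; the relevant part is simply that the key converter is the Avro converter and the primary key is taken from the record key, so no transforms are needed.

```properties
# Sketch of a JDBC sink that consumes the record-keyed Avro messages.
# Names and URLs below are placeholders, not values from the original post.
name=order-items-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=order_items
connection.url=jdbc:postgresql://localhost:5432/mydb

# Deserialize BOTH the key and the value with the Avro converter; using a
# different key converter was the source of the original errors.
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

# Take the primary-key columns from the record key, and turn tombstones into deletes.
insert.mode=upsert
pk.mode=record_key
delete.enabled=true
```

With this arrangement, the composite key stays strongly typed end to end: the producer writes it as an Avro record, and the sink maps its fields directly onto the table's primary-key columns.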
