Kafka Streams: How to Fix a Serde Conversion Error

Problem Description

When I simulate a word-count use case with an aggregation function, I run into a Serde conversion problem.

Exception in thread "aggregation-transformation-application-43485635-2d3c-4edc-b13c-c6505a793d18-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Deserialization exception handler is set to fail upon a deserialization error. If you would rather have the streaming pipeline continue after a deserialization error, please set the default.deserialization.exception.handler appropriately.
    at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:80)
    at org.apache.kafka.streams.processor.internals.RecordQueue.maybeUpdateTimestamp(RecordQueue.java:160)
    at org.apache.kafka.streams.processor.internals.RecordQueue.poll(RecordQueue.java:115)
    at org.apache.kafka.streams.processor.internals.PartitionGroup.nextRecord(PartitionGroup.java:100)
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:349)
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:199)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:420)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:890)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by IntegerDeserializer is not 4
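
As the stack trace itself notes, you can configure how the pipeline reacts to deserialization failures. A minimal sketch of the continue-instead-of-fail setting (note this only makes the application skip the offending records; the underlying Serde mismatch still needs fixing):

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

// Log records that fail to deserialize and keep processing,
// instead of crashing the stream thread.
config.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndContinueExceptionHandler.class);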

Although I define Serdes for each operation, it still throws a SerializationException.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.KeyValueStore;

import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

public class AggregationTransformation {
    public static void main(String[] args) {
        //prepare config
        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "aggregation-transformation-application");
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> kStream = builder.stream("agg-table-source-topic");
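        // Split each line into words, use each word as the new key, and map every value to 1.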
        KStream<String, Integer> kStreamFormatted = kStream.flatMapValues((key, value) ->
                Arrays.asList(value.split("\\W+"))).selectKey((key, value) -> value)
                .mapValues(value -> 1);

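        // Group by word (explicit Integer value serde for the repartitioned values;
        // the key serde falls back to the default String serde) and sum the 1s per word.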
        kStreamFormatted.groupByKey(Grouped.<String,Integer>as(null)
                .withValueSerde(Serdes.Integer()))
                .aggregate(() -> 0,
                        (aggKey, newValue, aggValue) -> aggValue + newValue,
                        Materialized.<String, Integer, KeyValueStore<Bytes, byte[]>>
                                as("aggregated-stream-store")
                                .withKeySerde(Serdes.String())
                                .withValueSerde(Serdes.Integer())
                ).toStream().to("agg-output-topic", Produced.with(Serdes.String(), Serdes.Integer()));

        Topology topology = builder.build();
        KafkaStreams kafkaStreams = new KafkaStreams(topology, config);

        CountDownLatch countDownLatch = new CountDownLatch(1);

        // attach shutdown handler to catch control-c
        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
            @Override
            public void run() {
                kafkaStreams.close();
                countDownLatch.countDown();
            }
        });

        try {
            kafkaStreams.start();
            countDownLatch.await();
        } catch (Throwable e) {
            System.exit(1);
        }
        System.exit(0);
    }
}

For the first entry typed into the producer console, "John Smith", I expect the output topic (agg-output-topic) to contain:

John 1
Smith 1

If I enter the same input into the producer (agg-table-source-topic) again, the output topic should aggregate and the result should be:

John 2
Smith 2

I appreciate your help.

Tags: apache-kafka, apache-kafka-streams

Solution


When I simulate a word-count use case with an aggregation function [...]

Your setup looks overly complicated. Why not simply do the following?

final KTable<String, Long> aggregated = builder.<String, String>stream("agg-table-source-topic")
  .flatMapValues(value -> Arrays.asList(value.split("\\W+")))
  .groupBy((keyIgnored, word) -> word)
  // Normally, you'd use `count()` here and be done with it.
  // But you mentioned you intentionally want to use `aggregate(...)`.
  .aggregate(
      () -> 0L,
      (aggKey, newValue, aggValue) -> aggValue + 1L,
      Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("aggregate-store").withValueSerde(Serdes.Long()));

aggregated.toStream().to("agg-output-topic", Produced.with(Serdes.String(), Serdes.Long()));
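
If you want to verify the expected counts without a running broker, the TopologyTestDriver from kafka-streams-test-utils can drive the topology directly. A minimal sketch, assuming the test-utils artifact (Kafka 2.4+) is on the classpath and `topology` has been built from the StreamsBuilder above:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.*;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-test");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
    TestInputTopic<String, String> input = driver.createInputTopic(
            "agg-table-source-topic", Serdes.String().serializer(), Serdes.String().serializer());
    TestOutputTopic<String, Long> output = driver.createOutputTopic(
            "agg-output-topic", Serdes.String().deserializer(), Serdes.Long().deserializer());

    input.pipeInput(null, "John Smith");
    input.pipeInput(null, "John Smith");

    // Expect one update per input word, e.g. (John,1) (Smith,1) (John,2) (Smith,2)
    System.out.println(output.readKeyValuesToList());
}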

In other words, compared with the plain WordCount example, all you need to do is replace this:

  .count()

with this:
  .aggregate(
      () -> 0L,
      (aggKey, newValue, aggValue) -> aggValue + 1L,
      Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("aggregate-store").withValueSerde(Serdes.Long()))

Note that the example code above uses Long rather than Integer, but you can of course change that.
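
For example, if you would rather keep Integer as in your original code, the same aggregate step might look like this (only the serde and the literals change; remember to also produce the output topic with Serdes.Integer()):

  .aggregate(
      () -> 0,
      (aggKey, newValue, aggValue) -> aggValue + 1,
      Materialized.<String, Integer, KeyValueStore<Bytes, byte[]>>as("aggregate-store").withValueSerde(Serdes.Integer()))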

