"Subject not found" (error_code 40401) from the Confluent Kafka Schema Registry

Problem description

I am using the Confluent Schema Registry Docker image. When I test it locally (with Kafka installed locally) it works as expected, but when I try to use it against a remote Kafka cluster I get this error:

{"error_code":40401,"message":"Subject not found. io.confluent.rest.exceptions.RestNotFoundException: Subject not found.\nio.confluent.rest.exceptions.RestNotFoundException: Subject not found.\n\tat io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:51)\n\tat io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.listVersions(SubjectVersionsResource.java:157)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tatjava.lang.reflect.Method.invoke(Method.java:498)\n\tat

Below is the command I use to run the Docker container:

docker run --network host -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=first_broker:9092,second_broker:9092,third_broker:9092 \
  -e SCHEMA_REGISTRY_HOST_NAME=0.0.0.0 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true \
  confluentinc/cp-schema-registry:latest

The error stack I get is:

    Producer clientId=producer-1] Updated cluster metadata updateVersion 2 to MetadataCache{cluster=Cluster(id = dIU-fffyfHXRDeVgZA4fud_eBw, nodes = [first_broker:9092 (id: 2 rack: subnret-0ecf514e9ghg94d5197a7), second_broker:9092 (id: 1 rack: subrnet-0befbedzd392e5497137), third_broker:9092 (id: 3 rack: subnret-0rrc00cc1dbd14c0350)], partitions = [Partition(topic = topics, partition = 0, leader = 1, replicas = [1,3,2], isr = [1,3,2], offlineReplicas = [])], controller = first_broker:9092 (id: 3 rack: subnret-0c0rr0cc1dbd14c0350))}
Sending POST with input {"schema":"\"string\""} to http://0.0.0.0:8081/subjects/topicName-value/versions
org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.SocketException: Unexpected end of file from server
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:851)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:8

I noticed that the _schemas topic was created in the remote Kafka cluster, but when I read from that topic with a console consumer, I got the following:

{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null
{"keytype":"NOOP","magic":0}-null

Any ideas how to fix this?

Tags: apache-kafka, avro, kafka-producer-api, confluent-schema-registry

Solution


SCHEMA_REGISTRY_HOST_NAME should be a resolvable hostname, not 0.0.0.0 (see the corrected docker run sketch below).

Likewise, do not use http://0.0.0.0:8081 in your producer code; point it at the registry's resolvable address instead.
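
As a minimal sketch of that producer-side change (schema-registry.example.com is a placeholder for the registry host's resolvable name; the serializer classes are the standard Kafka/Confluent ones, adjust to your own setup):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

Properties props = new Properties();
props.put("bootstrap.servers", "first_broker:9092,second_broker:9092,third_broker:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
// Point at the registry's resolvable address, not 0.0.0.0
props.put("schema.registry.url", "http://schema-registry.example.com:8081");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);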

The listeners are bind addresses, but they can also be omitted entirely as long as you forward the port; in that case, drop --network host.
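
Putting those points together, a corrected docker run would look roughly like this (schema-registry.example.com is a placeholder for a hostname that both the brokers and your producer can resolve; with the listeners variable omitted, the registry binds to its default port 8081 inside the container):

docker run -d -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=first_broker:9092,second_broker:9092,third_broker:9092 \
  -e SCHEMA_REGISTRY_HOST_NAME=schema-registry.example.com \
  -e SCHEMA_REGISTRY_DEBUG=true \
  confluentinc/cp-schema-registry:latest

Once the container is up, you can check that it is reachable and that your subject gets registered with something like curl http://schema-registry.example.com:8081/subjects.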

You can ignore the NOOP messages coming from the registry (it emits a couple of them at startup in order to find the end of the topic).

