Sending an HTTP response after consuming a Kafka topic

Problem description

I'm currently building a web application composed of many microservices. I'm exploring how to communicate properly between all of these services, and I've decided to stick with a message bus, or more specifically, Apache Kafka.

However, there are a few problems I'm not sure how to solve conceptually. I use an API gateway service as the main entry point to the application. It acts as the main proxy that forwards operations to the applicable microservices. Consider the following scenario:

  1. A user sends a POST request to the API gateway with some information.
  2. The gateway produces a new message and publishes it to a Kafka topic.
  3. A subscribed microservice picks up the message from the topic and processes the data.

So, how am I now supposed to respond to the client from the gateway? What if I need some data back from that microservice? It feels like the HTTP request could time out. Should I stick with websockets between the client and the API gateway instead?

Also, if the client sends a GET request to fetch some data, how am I supposed to handle that using Kafka?

Thanks.

Tags: javascript, node.js, apache-kafka, microservices, backend

Solution


Let's say you're going to create an order. This is how it should work:

  1. Traditionally we used to have an auto-increment field or a sequence in the RDBMS table to create an order id. However, this means the order id is not generated until we save the order in the DB. Now, since you're writing to Kafka rather than directly to the DB, Kafka cannot generate the order id for you. Hence you need a scalable id-generation utility like Twitter Snowflake, or something with a similar architecture, so that you can generate an order id even before writing the order to Kafka.

  2. Once you have the order id, write a single event message to a Kafka topic atomically (all-or-nothing). Once this succeeds, you can send a success response back to the client. Do not write to multiple topics at this stage, because you'd lose atomicity. You can always have multiple consumer groups that fan the event out to other topics; one consumer group should write the data to a persistent DB for querying.

  3. You now need to address the read-your-own-writes problem: immediately after receiving the success response, the user will want to see the order, but your DB is probably not yet updated with the order data. To achieve this, write the order data to a distributed cache like Redis or Memcached immediately after writing it to Kafka and before returning the success response. When the user reads the order, the cached data is returned.

  4. Now you need to keep the cache updated with the latest order status. You can do that with a Kafka consumer that reads order-status updates from a Kafka topic and refreshes the cache.

  5. You don't need to keep all orders in cache memory; you can evict entries based on LRU. If an order is not in the cache when it is read, it is fetched from the DB and written to the cache for future requests.

  6. Finally, if you want to ensure that the ordered item is reserved for the order so that no one else can take it, like booking a flight seat or the last copy of a book, you need a consensus mechanism. You can use Apache ZooKeeper for that and create a distributed lock on the item.
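For step 1, here's a minimal sketch of a Snowflake-style id generator in Node.js. This is a hypothetical illustration, not Twitter's exact implementation: it packs 41 bits of timestamp (from a custom epoch), 10 bits of worker id, and a 12-bit per-millisecond sequence into a BigInt, so ids are unique and roughly time-ordered without touching the DB.

```javascript
// Snowflake-style id generator (illustrative sketch):
// [41 bits timestamp | 10 bits worker id | 12 bits sequence]
const EPOCH = 1577836800000n; // custom epoch: 2020-01-01T00:00:00Z

function makeSnowflake(workerId) {
  if (workerId < 0 || workerId > 1023) {
    throw new RangeError("workerId must fit in 10 bits");
  }
  let lastTs = -1n;
  let sequence = 0n;
  return function nextId() {
    let ts = BigInt(Date.now());
    if (ts === lastTs) {
      sequence = (sequence + 1n) & 4095n; // 12-bit sequence wraps at 4096
      if (sequence === 0n) {
        // sequence exhausted for this millisecond; wait for the next one
        while (BigInt(Date.now()) <= lastTs) {}
        ts = BigInt(Date.now());
      }
    } else {
      sequence = 0n;
    }
    lastTs = ts;
    return ((ts - EPOCH) << 22n) | (BigInt(workerId) << 12n) | sequence;
  };
}

// The gateway can generate the order id before producing to Kafka:
const nextOrderId = makeSnowflake(1);
const a = nextOrderId();
const b = nextOrderId(); // strictly greater than a
```

In a real deployment each gateway instance would need a distinct worker id (assigned via config or a coordinator such as ZooKeeper) to guarantee uniqueness across instances.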

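Steps 3 and 5 together amount to a read-through cache with LRU eviction. Below is a hypothetical in-memory sketch: `db` stands in for the persistent store that a Kafka consumer group keeps up to date, and `put` is what the gateway would call right after producing the order event, so the user can immediately read their own write. In production this role would be played by Redis or Memcached rather than a JavaScript `Map`.

```javascript
// Read-through LRU cache for orders (illustrative sketch).
// A JS Map iterates in insertion order, which gives us LRU tracking for free.
class LruOrderCache {
  constructor(db, capacity) {
    this.db = db;             // fallback persistent store (here: a Map)
    this.capacity = capacity;
    this.entries = new Map();
  }

  // Called after the order event is written to Kafka and before the
  // success response is returned, solving read-your-own-writes.
  put(orderId, order) {
    this.entries.delete(orderId);  // re-insert to refresh recency
    this.entries.set(orderId, order);
    if (this.entries.size > this.capacity) {
      const oldest = this.entries.keys().next().value;
      this.entries.delete(oldest); // evict the least recently used entry
    }
  }

  get(orderId) {
    if (this.entries.has(orderId)) {
      const order = this.entries.get(orderId);
      this.put(orderId, order);    // bump recency on hit
      return order;
    }
    // Cache miss: fall back to the DB and populate the cache.
    const fromDb = this.db.get(orderId);
    if (fromDb !== undefined) this.put(orderId, fromDb);
    return fromDb;
  }
}

// Usage: the DB already holds an old order; a new one is cached on write.
const store = new Map([["order-1", { status: "SHIPPED" }]]);
const orders = new LruOrderCache(store, 100);
orders.put("order-2", { status: "CREATED" });
```

The Kafka consumer from step 4 would call `put` with each status update it reads, keeping cached entries fresh until they age out.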
