How to handle socket connections in a docker-swarm environment

Problem description

I am building a web application using Node.js as the server and Docker Swarm to handle replication and load balancing.

Right now, I need to handle real-time data updates between clients and the replicated servers, so I thought of using Socket.IO to handle the connections. All requests pass through an NGINX server that redirects them to the manager node of the swarm, and it is the swarm that handles the balancing. Since the topology of the network can change rapidly based on load, I am reluctant to let NGINX handle the balancing and apply sticky sessions... (maybe I am wrong)

From my understanding of this setup, if a client connects to my server, Docker's load balancer will send the request to one of my N replicated servers, and this and only this server will know that the client is connected. So it is possible that if a traditional HTTP request updates my data on another replica, the update will not be sent to the client, because the replica handling the request does not hold that connection.

Is there a way of handling situations like this? I thought of including a message queue between the servers to send the data to all of them, so that the one holding the connection can deliver it, but is that the recommended way of doing it?

Thank you very much

Tags: node.js, docker, socket.io, docker-swarm

Solution


Since asking the question, I have investigated further. I will post what I found in case it helps anyone who runs into a similar problem.

One option I found is to use a message queue (or something similar) to broadcast the messages to all replicas; each replica then filters and only delivers the messages it can, since the replica itself knows which TCP connections are available locally.
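To make that broadcast option concrete, here is a minimal sketch, assuming Redis pub/sub as the message queue and Socket.IO v4; the `updates` channel name and the `userId` query parameter are illustrative assumptions, not part of the original setup.

```js
// Broadcast option: every replica subscribes to the same channel and
// filters messages against the TCP connections it actually holds.
const http = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');

const httpServer = http.createServer();
const io = new Server(httpServer);

// Each replica only knows about the sockets connected to it.
const localSockets = new Map(); // userId -> socket

io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId; // assumed client identifier
  localSockets.set(userId, socket);
  socket.on('disconnect', () => localSockets.delete(userId));
});

async function start() {
  const sub = createClient();
  await sub.connect();

  // Every replica receives every update...
  await sub.subscribe('updates', (raw) => {
    const { userId, payload } = JSON.parse(raw);
    const socket = localSockets.get(userId);
    // ...but only the replica holding the connection actually emits.
    if (socket) socket.emit('update', payload);
  });

  httpServer.listen(3000);
}

start();
```

The replica that handles the HTTP update would then publish with something like `pub.publish('updates', JSON.stringify({ userId, payload }))`, without needing to know which replica holds the connection.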

But I think this puts too much pressure on the replicas, because every one of them receives every message. So a solution would be to create a queue or service that links a given connection id to its replica and forwards messages only to the replicas that are interested. I think this can be done fairly easily with topics, or by creating a queue per TCP connection with some id as the identifier, and then pushing to the corresponding queue.
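A minimal sketch of that routing idea, assuming a Redis hash as the connection registry and one pub/sub channel per replica; `REPLICA_ID`, the `conn-registry` key and the `updates:<replica>` channel naming are all illustrative assumptions.

```js
// Routing option: a shared registry maps each connection id to the
// replica that owns it, so updates are published only to that replica.
const { createClient } = require('redis');

const REPLICA_ID = process.env.REPLICA_ID || 'replica-1'; // assumed per-task identifier

// Called by the replica when a socket connects / disconnects.
async function registerConnection(redis, userId) {
  await redis.hSet('conn-registry', userId, REPLICA_ID);
}
async function unregisterConnection(redis, userId) {
  await redis.hDel('conn-registry', userId);
}

// Called by whichever replica handled the HTTP request that changed the data.
async function sendToUser(redis, userId, payload) {
  const replicaId = await redis.hGet('conn-registry', userId);
  if (!replicaId) return; // the user is not connected to any replica
  await redis.publish(`updates:${replicaId}`, JSON.stringify({ userId, payload }));
}

async function start() {
  const redis = createClient();
  await redis.connect();

  // Subscriptions need a dedicated connection in node-redis v4.
  const sub = redis.duplicate();
  await sub.connect();

  // Each replica subscribes only to its own channel, so it never sees
  // messages meant for connections held elsewhere.
  await sub.subscribe(`updates:${REPLICA_ID}`, (raw) => {
    const { userId, payload } = JSON.parse(raw);
    // look up the local socket for userId and emit, as in the previous sketch
  });
}

start();
```

With this shape, the fan-out per update is one replica instead of all N, at the cost of keeping the registry consistent when sockets disconnect or replicas are rescheduled.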

If anyone sees any problem with this or wants to add something, it would be much appreciated!

