kubernetes - StatefulSet breaking Kafka on worker reboot (unordered start)
Problem description
In a worker-node reboot scenario (Kubernetes 1.14.3), does the start order of a StatefulSet's pods matter? I have a Confluent Kafka (5.5.1) setup where member 1 started well before 0 and a bit ahead of 2, and as a result I see a lot of crashes on kafka-0. Is there some mechanic here that breaks things? Startup is ordinal and deletion is reversed, but what happens when that order is broken?
Started: Sun, 02 Aug 2020 00:52:54 +0100 kafka-0
Started: Sun, 02 Aug 2020 00:50:25 +0100 kafka-1
Started: Sun, 02 Aug 2020 00:50:26 +0100 kafka-2
Started: Sun, 02 Aug 2020 00:28:53 +0100 zk-0
Started: Sun, 02 Aug 2020 00:50:29 +0100 zk-1
Started: Sun, 02 Aug 2020 00:50:19 +0100 zk-2
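For reference, the ordering behavior the question describes is controlled by the StatefulSet's `spec.podManagementPolicy` field. Below is a minimal sketch of the relevant fragment of a broker StatefulSet; the resource name, labels, and Service name are illustrative assumptions, not taken from the question:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka                   # illustrative name
spec:
  serviceName: kafka-headless   # assumed headless Service name
  replicas: 3
  # OrderedReady (the default) creates pods 0..N-1 one at a time, waiting
  # for each to become Ready, and scales down in reverse order; Parallel
  # launches and terminates all pods at once. Either way, this ordering is
  # enforced by the StatefulSet controller for create/scale operations;
  # containers restarted in place by the kubelet after a node reboot are
  # not re-sequenced by it.
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.5.1   # version from the question
```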
Solution