kubernetes - Kubernetes Cluster Autoscaler - 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added)
Question
I want to run a "Job" per node, one pod on a node at a time.
- I have scheduled a bunch of jobs
- I now have a bunch of pending pods
- I would like those pending pods to trigger a node scale-up event (which does not happen)
This is very much like this question (also mine): Kubernetes reporting "pod didn't trigger scale-up (it wouldn't fit if a new node is added)" even though it would?
But in this case the pod really should fit on a new node.
How do I diagnose why Kubernetes has decided that a node scale-up is not possible?
My job yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job-${job_id}
  labels:
    job-in-progress: job-in-progress-yes
spec:
  template:
    metadata:
      name: example-job-${job_id}
    spec:
      # this bit ensures a job/container does not get scheduled alongside another - 'anti' affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchExpressions:
              - key: job-in-progress
                operator: NotIn
                values:
                - job-in-progress-yes
      containers:
      - name: buster-slim
        image: debian:buster-slim
        command: ["bash"]
        args: ["-c", "sleep 60; echo ${echo_param}"]
      restartPolicy: Never
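For reference, the logs below come from the autoscaler pod itself; this is roughly how I tailed them (adjust the namespace and deployment name to your install - mine runs in default, many installs use kube-system):

# follow the cluster-autoscaler's own log stream
kubectl -n default logs deployment/cluster-autoscaler --tail=100 -f

# the same NotTriggerScaleUp decision also shows up as an event on each pending pod
kubectl describe pod example-job-21-npv6p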
Autoscaler logs:
I0920 19:27:58.190751 1 static_autoscaler.go:128] Starting main loop
I0920 19:27:58.261972 1 auto_scaling_groups.go:320] Regenerating instance to ASG map for ASGs: []
I0920 19:27:58.262003 1 aws_manager.go:152] Refreshed ASG list, next refresh after 2019-09-20 19:28:08.261998185 +0000 UTC m=+302.102284346
I0920 19:27:58.262092 1 static_autoscaler.go:261] Filtering out schedulables
I0920 19:27:58.264212 1 static_autoscaler.go:271] No schedulable pods
I0920 19:27:58.264246 1 scale_up.go:262] Pod default/example-job-21-npv6p is unschedulable
I0920 19:27:58.264252 1 scale_up.go:262] Pod default/example-job-28-zg4f8 is unschedulable
I0920 19:27:58.264258 1 scale_up.go:262] Pod default/example-job-24-fx9rd is unschedulable
I0920 19:27:58.264263 1 scale_up.go:262] Pod default/example-job-6-7mvqs is unschedulable
I0920 19:27:58.264268 1 scale_up.go:262] Pod default/example-job-20-splpq is unschedulable
I0920 19:27:58.264273 1 scale_up.go:262] Pod default/example-job-25-g5mdg is unschedulable
I0920 19:27:58.264279 1 scale_up.go:262] Pod default/example-job-16-wtnw4 is unschedulable
I0920 19:27:58.264284 1 scale_up.go:262] Pod default/example-job-7-g89cp is unschedulable
I0920 19:27:58.264289 1 scale_up.go:262] Pod default/example-job-8-mglhh is unschedulable
I0920 19:27:58.264323 1 scale_up.go:304] Upcoming 0 nodes
I0920 19:27:58.264370 1 scale_up.go:420] No expansion options
I0920 19:27:58.264511 1 static_autoscaler.go:333] Calculating unneeded nodes
I0920 19:27:58.264533 1 utils.go:474] Skipping ip-10-0-1-118.us-west-2.compute.internal - no node group config
I0920 19:27:58.264542 1 utils.go:474] Skipping ip-10-0-0-65.us-west-2.compute.internal - no node group config
I0920 19:27:58.265063 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-25-g5mdg", UID:"d2e0e48c-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7256", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265090 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-8-mglhh", UID:"c7d3ce78-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7267", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265101 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-6-7mvqs", UID:"c6a5b0e4-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7273", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265110 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-20-splpq", UID:"cfeb9521-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7259", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265363 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-21-npv6p", UID:"d084c067-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7275", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265384 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-16-wtnw4", UID:"ccbe48e0-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7265", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265490 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-28-zg4f8", UID:"d4afc868-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7269", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265515 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-24-fx9rd", UID:"d24975e5-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7271", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265685 1 static_autoscaler.go:360] Scale down status: unneededOnly=true lastScaleUpTime=2019-09-20 19:23:23.822104264 +0000 UTC m=+17.662390361 lastScaleDownDeleteTime=2019-09-20 19:23:23.822105556 +0000 UTC m=+17.662391653 lastScaleDownFailTime=2019-09-20 19:23:23.822106849 +0000 UTC m=+17.662392943 scaleDownForbidden=false isDeleteInProgress=false
I0920 19:27:58.265910 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-7-g89cp", UID:"c73cfaea-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7263", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
Solution
I had defined the wrong arguments on the autoscaler. I had to fix the node-group-auto-discovery and nodes parameters. In hindsight the logs say as much: "Regenerating instance to ASG map for ASGs: []" and "Skipping ip-10-0-1-118.us-west-2.compute.internal - no node group config" mean the autoscaler had not discovered any node groups at all, so no pending pod could ever fit on a "new node".
- ./cluster-autoscaler
- --cloud-provider=aws
- --namespace=default
- --scan-interval=25s
- --scale-down-unneeded-time=30s
# explicit node group registration, in min:max:ASG-name form
- --nodes=1:20:terraform-eks-demo20190922161659090500000007--terraform-eks-demo20190922161700651000000008
# auto-discovery by ASG tags; the second tag ends in the cluster name
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/example-job-runner
- --logtostderr=true
- --stderrthreshold=info
- --v=4
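For asg:tag auto-discovery to find anything, the ASGs themselves must carry those tags. As a sketch, this is how they could be added and verified with the AWS CLI (the ASG name is copied from my --nodes flag above; the tag values are only illustrative):

# tag the ASG so cluster-autoscaler auto-discovery can find it
aws autoscaling create-or-update-tags --tags \
  "ResourceId=terraform-eks-demo20190922161659090500000007--terraform-eks-demo20190922161700651000000008,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true" \
  "ResourceId=terraform-eks-demo20190922161659090500000007--terraform-eks-demo20190922161700651000000008,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/example-job-runner,Value=owned,PropagateAtLaunch=true"

# confirm the autoscaler tags are actually on an ASG
aws autoscaling describe-tags --filters Name=key,Values=k8s.io/cluster-autoscaler/enabled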
When installing the cluster autoscaler, it is not enough to just apply the example configuration, e.g.:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
As described in the user guide, that configuration has a placeholder for your EKS cluster name in the value for node-group-auto-discovery; you must replace it before applying, or update the deployment after it is applied.
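One way to do that substitution inline (the <YOUR CLUSTER NAME> placeholder is what the example manifest used at the time of writing - check your copy; example-job-runner is my cluster name):

curl -s https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml \
  | sed 's/<YOUR CLUSTER NAME>/example-job-runner/g' \
  | kubectl apply -f -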