YarnAllocator requests more containers than I asked for

Problem description

YarnAllocator and the Yarn Resource Manager are being so generous that they request and hand out more than I specified in my configuration. I asked for 72 containers in total, but I got 133. I expected YarnAllocator to allocate only the number I requested. Can someone explain what is going on?

Here are the requests captured from the logs:

18/06/08 06:52:29 INFO yarn.YarnAllocator: Will request 72 executor container(s), each with 4 core(s) and 11264 MB memory (including 3072 MB of overhead)
18/06/08 06:52:29 INFO yarn.YarnAllocator: Submitted 72 unlocalized container requests.
...
18/06/08 06:52:30 INFO yarn.YarnAllocator: Will request 8 executor container(s), each with 4 core(s) and 11264 MB memory (including 3072 MB of overhead)
18/06/08 06:52:30 INFO yarn.YarnAllocator: Submitted 8 unlocalized container requests.
...
18/06/08 06:52:31 INFO yarn.YarnAllocator: Will request 53 executor container(s), each with 4 core(s) and 11264 MB memory (including 3072 MB of overhead)
18/06/08 06:52:32 INFO yarn.YarnAllocator: Submitted 53 unlocalized container requests.
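
Adding up the batches above: 72 + 8 + 53 = 133 container requests in total, which matches the 133 containers I ended up with, even though --num-executors is 72.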

Here is my Spark configuration:

--driver-memory 4g \
--executor-memory 8g \
--executor-cores 4 \
--num-executors 72 \
--conf spark.yarn.executor.memoryOverhead=3072 \
--conf spark.executor.extraJavaOptions="-XX:+UseG1GC" \
--conf spark.yarn.max.executor.failures=128 \
--conf spark.memory.fraction=0.1 \
--conf spark.rdd.compress=true \
--conf spark.shuffle.compress=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.shuffle.spill.compress=true \
--conf spark.speculation=false \
--conf spark.task.maxFailures=1000 \
--conf spark.sql.codegen.wholeStage=false \
--conf spark.scheduler.listenerbus.eventqueue.size=100000 \
--conf spark.shuffle.service.enabled=false \
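
For reference, the per-container memory in the log lines follows from this configuration: --executor-memory 8g is 8192 MB, and adding spark.yarn.executor.memoryOverhead=3072 gives the 11264 MB reported for each container.

To cross-check how many containers YARN has actually granted, the standard YARN CLI can be used. A minimal sketch, where <applicationId> and <applicationAttemptId> are placeholders rather than values from this job:

yarn application -list                             # look up <applicationId>
yarn applicationattempt -list <applicationId>      # look up <applicationAttemptId>
yarn container -list <applicationAttemptId> | grep -c container_   # count containers currently reported for the attempt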

Tags: apache-spark, hadoop-yarn, resourcemanager

Solution

