Spark with YARN - stuck at WARN cluster.YarnScheduler: Initial job has not accepted any resources

Problem description

I'm trying to learn Hadoop/Spark, and for that I bought two Raspberry Pi 4B+ boards (4-core CPU and 2 GB of RAM each). I have followed a number of tutorials and have already run MapReduce jobs, but Spark gets stuck every time at WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The command I'm running is spark-submit --class org.apache.spark.examples.SparkPi --master yarn /opt/spark/examples/jars/spark-examples*.jar 10
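For experimenting, the same resource settings can also be passed directly on the command line instead of through the config file. A sketch of the same job with the resources spelled out explicitly (the values are illustrative, not a known fix):

# same SparkPi job, resources made explicit on the CLI (illustrative values)
spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --driver-memory 512m \
    --executor-memory 512m \
    --executor-cores 1 \
    --num-executors 1 \
    /opt/spark/examples/jars/spark-examples*.jar 10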

When I run yarn node -list I get this:

2021-05-21 18:05:26,548 INFO client.RMProxy: Connecting to ResourceManager at raspberrypi1/192.168.1.101:8032
Total Nodes:1
         Node-Id         Node-State Node-Http-Address   Number-of-Running-Containers
raspberrypi2:35133          RUNNING raspberrypi2:8042                              0
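Only one NodeManager is registered there. The memory and vcores it registered with can be checked with the standard node status subcommand (NodeId taken from the listing above):

# shows total/used memory and vcores this NodeManager advertised to the RM
yarn node -status raspberrypi2:35133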

This is the part of the job output that mentions my worker node, up to the point where it gets stuck:

2021-05-21 17:41:17,015 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.1.102
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1621629646204
     final status: UNDEFINED
     tracking URL: http://raspberrypi1:8088/proxy/application_1621629551274_0001/
     user: pi
2021-05-21 17:41:17,019 INFO cluster.YarnClientSchedulerBackend: Application application_1621629551274_0001 has started running.
2021-05-21 17:41:17,052 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41723.
2021-05-21 17:41:17,054 INFO netty.NettyBlockTransferService: Server created on raspberrypi1:41723
2021-05-21 17:41:17,059 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2021-05-21 17:41:17,146 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, raspberrypi1, 41723, None)
2021-05-21 17:41:17,156 INFO storage.BlockManagerMasterEndpoint: Registering block manager raspberrypi1:41723 with 117.0 MB RAM, BlockManagerId(driver, raspberrypi1, 41723, None)
2021-05-21 17:41:17,178 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, raspberrypi1, 41723, None)
2021-05-21 17:41:17,180 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, raspberrypi1, 41723, None)
2021-05-21 17:41:17,844 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> raspberrypi2, PROXY_URI_BASES -> http://raspberrypi2:8088/proxy/application_1621629551274_0001), /proxy/application_1621629551274_0001
2021-05-21 17:41:17,884 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
2021-05-21 17:41:17,906 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@18f1712{/metrics/json,null,AVAILABLE,@Spark}
2021-05-21 17:41:17,972 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
2021-05-21 17:41:18,255 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
2021-05-21 17:41:19,157 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
2021-05-21 17:41:19,241 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10 output partitions
2021-05-21 17:41:19,242 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
2021-05-21 17:41:19,244 INFO scheduler.DAGScheduler: Parents of final stage: List()
2021-05-21 17:41:19,249 INFO scheduler.DAGScheduler: Missing parents: List()
2021-05-21 17:41:19,281 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
2021-05-21 17:41:19,667 WARN util.SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
2021-05-21 17:41:19,706 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 117.0 MB)
2021-05-21 17:41:19,835 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1381.0 B, free 117.0 MB)
2021-05-21 17:41:19,842 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on raspberrypi1:41723 (size: 1381.0 B, free: 117.0 MB)
2021-05-21 17:41:19,851 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1184
2021-05-21 17:41:19,922 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
2021-05-21 17:41:19,926 INFO cluster.YarnScheduler: Adding task set 0.0 with 10 tasks
2021-05-21 17:41:34,994 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

That last message then repeats every 3 seconds.
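While it loops like this, the application's state on the YARN side can be inspected from another terminal with the standard YARN CLI (application id taken from the log above):

# current state, queue, and allocated resources of the application
yarn application -status application_1621629551274_0001
# ApplicationMaster logs; needs yarn.log-aggregation-enable=true and may
# only be complete after the application ends
yarn logs -applicationId application_1621629551274_0001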

The executors page of the Spark job web UI only shows my master (raspberrypi1), and the timeline only shows the driver being added; then the reduce stage of the Pi job starts (but never finishes).
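The same executor information is also available from the driver's REST API while the job is running (standard Spark monitoring endpoints, served on port 4040 by default):

# list applications known to this driver
curl -s http://raspberrypi1:4040/api/v1/applications
# executors for a given application id (<app-id> is a placeholder)
curl -s http://raspberrypi1:4040/api/v1/applications/<app-id>/executors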

As suggested by the last tutorial I followed, my IPs are 192.168.1.101 (hostname raspberrypi1) and 192.168.1.102 (hostname raspberrypi2).
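The /etc/hosts file itself is not shown here, but a mapping consistent with those addresses would look like this on both machines (an assumption about this setup):

192.168.1.101   raspberrypi1
192.168.1.102   raspberrypi2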

I have changed my configuration files many times, increasing or decreasing the values, but with no luck. They are currently:

spark-defaults.conf (on the master only)

spark.master            yarn
spark.driver.memory     512m
spark.yarn.am.memory        512m
spark.executor.memory       512m
spark.executor.cores        2
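One detail worth noting about these numbers: for each container, YARN grants the JVM heap plus a per-container memory overhead (by default max(384 MB, 10% of the heap)), rounded up to a multiple of yarn.scheduler.minimum-allocation-mb. With the settings above that works out roughly to:

# effective container requests (defaults assumed; 896 is already a
# multiple of the 128 MB minimum allocation, so no further rounding)
#   AM container:       512 MB heap + 384 MB overhead = 896 MB
#   executor container: 512 MB heap + 384 MB overhead = 896 MB
# each fits under yarn.scheduler.maximum-allocation-mb (1536 MB), but a
# single 1536 MB NodeManager cannot host both at once:
#   896 MB + 896 MB = 1792 MB > 1536 MB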

yarn-site.xml (on both machines)

<configuration>
        <property>
                <name>yarn.acl.enable</name>
                <value>0</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>raspberrypi1</value>
        </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8088</value>
    </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>1536</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>1536</value>
        </property>
        <property>
                <name>yarn.scheduler.minimum-allocation-mb</name>
                <value>128</value>
        </property>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
</configuration>
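To confirm that the NodeManager actually registered with these limits, the ResourceManager's standard REST API can be queried:

# cluster-wide memory/vcore totals as the scheduler sees them
curl -s http://raspberrypi1:8088/ws/v1/cluster/metrics
# per-node detail, including available memory and vcores
curl -s http://raspberrypi1:8088/ws/v1/cluster/nodes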

mapred-site.xml (on both machines)

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
    <property>
            <name>yarn.app.mapreduce.am.env</name>
            <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
            <name>mapreduce.map.env</name>
            <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
            <name>mapreduce.reduce.env</name>
            <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
        <property>
                <name>yarn.app.mapreduce.am.resource.mb</name>
                <value>512</value>
        </property>
        <property>
                <name>mapreduce.map.memory.mb</name>
                <value>256</value>
        </property>
        <property>
                <name>mapreduce.reduce.memory.mb</name>
                <value>256</value>
        </property>
</configuration>

.bashrc (on both machines)

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH
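Note that HADOOP_MAPRED_HOME in mapred-site.xml above points to /usr/local/hadoop, while HADOOP_HOME here is /opt/hadoop. A quick sanity check that the configured paths actually exist:

# only the real installation directory should be listed
ls -d /opt/hadoop /usr/local/hadoop 2>/dev/null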

I've been genuinely stuck on this for more than a week now, and would really appreciate some help. Thank you.

Tags: apache-spark, hadoop, hadoop-yarn
