Error TransportResponseHandler: still have 3 requests outstanding when connection from node closed

Problem description

I want to do some operations with RDD.map, and it works on Spark on YARN. However, adding a for loop to it produces an error. I would like to know why, and how to fix it.

When I add the for (walkCount...) loop, Spark produces the following error:

java.io.FileNotFoundException: /home/xxx/usr/hadoop-2.7.3/tmp/nm-local-dir/usercache/xxx/appcache/application_1554174196597_0019/blockmgr-ac0eb809-641a-437a-a2f0-223084771848/1f/temp_shuffle_303f490a-6e1b-46a2-ae98-3e3460218bbf (Too many open files)

...

19/04/02 19:41:53 ERROR TransportResponseHandler: Still have 3 requests outstanding when connection from node6/ip:40762 is closed
19/04/02 19:41:53 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks ...

The code is below; it works without the for (walkCount...) loop.

def randomWalk(): RDD[Array[Long]] = { // get a sequence of nodes (Long) from a multilayer graph
  var randomWalk = initialWalk.map { case (nodeId, clickNode) =>
    ...
    (nodeId, pathBuffer, layer) // nodeId: Long; pathBuffer: ArrayBuffer[Long]; layer: Int
  }.persist(persistLevel) // this part works fine

  for (walkCount <- 0 until 60) { // without this for loop, it works

    randomWalk = randomWalk.map { case (nodeId, pathBuffer, layer) =>
      val prevNodeId = pathBuffer(pathBuffer.length - 2) // the second-to-last node
      val currentNodeId = pathBuffer.last // the last node
      (s"$prevNodeId $currentNodeId", (nodeId, pathBuffer, layer))
    }.join(indexedEdges).map { case (edge, ((nodeId, pathBuffer, currentLayer), dstNeighbors)) =>
      // indexedEdges is RDD[(s"$prevNodeId $currentNodeId", dstNeighbors)],
      // where dstNeighbors are currentNodeId's neighbors:
      // Array[(neighborId: Long, layer: Int, weight: Double, tal: Double)]
      try {
        val lastNode = pathBuffer.last
        // produceNode returns Array[(nextNodeId: Long, newLayer: Int)] of size 1: it chooses the
        // next node and whether to change layer, using math.random and a constant, Graphops.q
        val nextNode = Graphops.produceNode(dstNeighbors, currentLayer, lastNode)
        require(nextNode.length == 1, "nextNode.length != 1")
        pathBuffer.append(nextNode(0)._1)
        (nodeId, pathBuffer, nextNode(0)._2)
      } catch {
        case e: Exception => throw new RuntimeException(e.getMessage)
      }
    }.persist(persistLevel)
  } // end of the for loop

  randomWalk.map(_._2.toArray)
}

Tags: scala, apache-spark

Solution


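The stack trace points at shuffle files. Every iteration of the loop adds a join, and a join is a shuffle: after 60 iterations the lineage holds 60 shuffle dependencies, each with its own set of intermediate files on every executor. Once an executor hits the per-process open-file limit, shuffle writes fail with java.io.FileNotFoundException ... (Too many open files); the TransportResponseHandler and RetryingBlockFetcher messages are most likely the other nodes failing to fetch blocks from the dying executor afterwards.

Two remedies are commonly combined. First, raise the open-file limit for the user running the NodeManagers (check it with ulimit -n; the default of 1024 on many Linux distributions is easily exhausted by a shuffle-heavy job). Second, truncate the lineage periodically with checkpoint() and unpersist superseded RDDs, so Spark can release the shuffle state of earlier iterations. Below is a minimal sketch of that pattern in isolation; the step function, the checkpoint directory, and the every-10-iterations interval are illustrative assumptions, not code from the question.

import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch of an iterative RDD update with periodic lineage truncation.
// `step` stands in for the question's map/join/map body; the checkpoint
// directory and the interval of 10 are assumptions for illustration.
object RandomWalkSketch {
  def step(rdd: RDD[(Long, List[Long])]): RDD[(Long, List[Long])] =
    rdd.map { case (id, path) => (id, (path.head + 1L) :: path) } // placeholder for one walk step

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("randomwalk-sketch"))
    sc.setCheckpointDir("hdfs:///tmp/randomwalk-checkpoints") // assumed path

    val persistLevel = StorageLevel.MEMORY_AND_DISK
    var walk: RDD[(Long, List[Long])] =
      sc.parallelize(0L until 1000L).map(id => (id, List(id))).persist(persistLevel)

    for (walkCount <- 0 until 60) {
      val previous = walk
      walk = step(walk).persist(persistLevel)

      if (walkCount % 10 == 9) { // every 10th iteration
        walk.checkpoint()        // mark for lineage truncation...
        walk.count()             // ...and materialize it with an action
      }
      previous.unpersist(blocking = false) // release the superseded iteration's blocks
    }

    walk.count()
    sc.stop()
  }
}

checkpoint() only takes effect at the next action, and it recomputes the RDD unless it is persisted first, which is why the sketch persists before checkpointing and forces materialization with count(). If weaker fault tolerance is acceptable, localCheckpoint() is a cheaper alternative that writes to executor-local storage instead of the checkpoint directory.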