Running a Spark Structured Streaming Scala application from a file with nohup

Problem description

I have Spark Structured Streaming Scala code written to run in batch mode, and I am trying to run it with:

nohup spark2-shell -i /home/sandeep/spark_test.scala --master yarn --deploy-mode client

Here is the spark_test.scala file:

import org.apache.spark.sql._
import org.apache.spark.sql.types.StructType
import org.apache.spark.SparkConf

val data_schema1 = new StructType().add("a","string").add("b","string").add("c","string")
val data_schema2 = new StructType().add("d","string").add("e","string").add("f","string")

val data1 = spark.readStream.option("sep", ",").schema(data_schema1).csv("/tmp/data1/")
val data2 = spark.readStream.option("sep", ",").schema(data_schema2).csv("/tmp/data2/")

data1.createOrReplaceTempView("sample_data1")
data2.createOrReplaceTempView("sample_data2")

val df = sql("select sd1.a, sd1.b, sd2.e, sd2.f from sample_data1 sd1 join sample_data2 sd2 on sd1.a = sd2.d")

df.writeStream.format("csv").option("path", "/tmp/output").option("checkpointLocation", "/tmp/output_cp").outputMode("append").start()
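If the script is meant to run unattended, it can also block on the running query instead of relying on the interactive REPL loop. A minimal sketch using the standard `StreamingQueryManager` API (assumes the query above has already been started):

```scala
// Keep the driver alive until the streaming query terminates,
// so the script does not depend on the REPL reading from stdin.
spark.streams.awaitAnyTermination()

// Exit the shell once the query stops, instead of dropping back to the REPL.
System.exit(0)
```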

I need the application to keep running in the background even after the terminal is closed. It is a very small application, and I don't want to submit it with spark-submit. The code runs fine from the file without nohup, but when I use nohup I get the following error:

java.io.IOException: Bad file descriptor
        at java.io.FileInputStream.readBytes(Native Method)
        at java.io.FileInputStream.read(FileInputStream.java:229)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:229)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:246)
        at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
        at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
        at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
        at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
        at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
        at mypackage.MyXmlParser.parseFile(MyXmlParser.java:397)
        at mypackage.MyXmlParser.access$500(MyXmlParser.java:51)
        at mypackage.MyXmlParser$1.call(MyXmlParser.java:337)
        at mypackage.MyXmlParser$1.call(MyXmlParser.java:328)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:284)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:665)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:690)
        at java.lang.Thread.run(Thread.java:799)

Tags: scala, apache-spark, spark-structured-streaming, nohup

Solution

Add `&` at the end of your nohup command.

The `&` symbol at the end of the command tells bash to run `nohup mycommand` in the background:

nohup spark2-shell -i /home/sandeep/spark_test.scala --master yarn --deploy-mode client &
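When detaching, it also helps to redirect stdin and output so the shell never touches the closed terminal (which is what triggers the `Bad file descriptor` read errors). A sketch of a common pattern; the log and pid file names here are illustrative:

```shell
# Redirect stdin from /dev/null so the REPL cannot try to read from a
# terminal that no longer exists, and capture all output in a log file.
nohup spark2-shell -i /home/sandeep/spark_test.scala \
  --master yarn --deploy-mode client < /dev/null > spark_test.log 2>&1 &

# Save the background PID so the job can be stopped later with kill.
echo $! > spark_test.pid
```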

For more details on the nohup command, see this link.

