NullPointerException occurs only when running the jar file - Scala Spark

Problem description

I have a big data analytics project written in Scala using Spark.

When I run the project with sbt run, it works fine and gives me the results I expect. Afterwards, I built a jar file with sbt assembly and ran it with java -jar my.jar, but the process stops with a NullPointerException.

Can anyone explain why this happens?

I have attached the stack trace for reference.

2019-06-15 18:49:56 DEBUG BlockManager:58 - Getting local block broadcast_0
2019-06-15 18:49:56 DEBUG BlockManager:58 - Level for block broadcast_0 is StorageLevel(disk, memory, deserialized, 1 replicas)
2019-06-15 18:49:57 INFO  CodecPool:179 - Got brand-new decompressor [.gz]
2019-06-15 18:49:57 DEBUG TaskMemoryManager:427 - unreleased 8.0 MB memory from org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch@43d7b5b2
2019-06-15 18:49:57 DEBUG TaskMemoryManager:427 - unreleased 256.0 KB memory from org.apache.spark.unsafe.map.BytesToBytesMap@6aee5557
2019-06-15 18:49:57 DEBUG TaskMemoryManager:434 - unreleased page: org.apache.spark.unsafe.memory.MemoryBlock@3517be49 in task 0
2019-06-15 18:49:57 DEBUG TaskMemoryManager:434 - unreleased page: org.apache.spark.unsafe.memory.MemoryBlock@295da0ba in task 0
2019-06-15 18:49:57 ERROR Executor:91 - Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
    at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:153)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:102)
    at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.<init>(HadoopFileLinesReader.scala:46)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:127)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:124)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(generated.java:568)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:587)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-06-15 18:49:57 DEBUG TaskSchedulerImpl:58 - parentName: , name: TaskSet_0.0, runningTasks: 0
2019-06-15 18:49:57 WARN  TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException
    at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:153)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:102)
    at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.<init>(HadoopFileLinesReader.scala:46)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:127)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:124)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(generated.java:568)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:587)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Tags: java, scala, apache-spark, jar, sbt

Solution


Spark applications should be launched with spark-submit. See here.

So what you should do is package your application as a jar and run that jar in a Spark context using spark-submit.
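For example, an invocation might look like the following (a sketch only: the main class name, jar path, and master setting are placeholders for illustration, not values from your project):

    spark-submit \
      --class com.example.MyApp \
      --master "local[*]" \
      target/scala-2.11/my-assembly.jar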

sbt run either detects your Spark application (depending on your sbt setup) or simply runs your main method as a plain Scala application (again, depending on your setup). A minimal build sketch is shown below.
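For reference, when packaging with sbt assembly for use with spark-submit, the Spark dependencies are usually marked as "provided" so they are not bundled into the assembly jar but are supplied by the Spark installation at runtime. A minimal build.sbt sketch, assuming Spark SQL 2.3.x and Scala 2.11 (the versions are assumptions; adjust them to your project):

    // build.sbt (sketch; names and versions are assumptions)
    name := "my-spark-app"
    scalaVersion := "2.11.12"
    libraryDependencies ++= Seq(
      // "provided": available at compile time, supplied by spark-submit at run time
      "org.apache.spark" %% "spark-sql" % "2.3.3" % "provided"
    )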

In any case, that is all the information I can offer. Please provide more details to get a better answer.

