Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: jdbc

Problem Description

This is my code from IntelliJ:

package com.dmngaya

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

object ReadVertexPage {

  def main(args: Array[String]): Unit = {
    val conf: SparkConf = new SparkConf().setAppName("ReadVertexPage").setMaster("local")
    val sc: SparkContext = new SparkContext(conf)
    val spark = SparkSession
      .builder()
      .appName("Spark SQL basic example")
      .getOrCreate()

    val jdbcDF1 = spark.read.format("jdbc").options(
      Map(
        "driver" -> "com.tigergraph.jdbc.Driver",
        "url" -> "jdbc:tg:http://127.0.0.1:14240",
        "username" -> "tigergraph",
        "password" -> "tigergraph",
        "graph" -> "gsql_demo", // graph name
        "dbtable" -> "vertex Page", // vertex type
        "limit" -> "10", // number of vertices to retrieve
        "debug" -> "0")).load()

    jdbcDF1.show
  }
}

When I run it from spark-shell, it runs fine:

/opt/spark/bin/spark-shell --jars /home/tigergraph/ecosys/tools/etl/tg-jdbc-driver/tg-jdbc-driver/target/tg-jdbc-driver-1.2.jar

scala> val jdbcDF1 = spark.read.format("jdbc").options(
 |   Map(
 |     "driver" -> "com.tigergraph.jdbc.Driver",
 |     "url" -> "jdbc:tg:http://127.0.0.1:14240",
 |     "username" -> "tigergraph",
 |     "password" -> "tigergraph",
 |     "graph" -> "gsql_demo", // graph name
 |     "dbtable" -> "vertex Page", // vertex type
 |     "limit" -> "10", // number of vertices to retrieve
 |     "debug" -> "0")).load()
jdbcDF1: org.apache.spark.sql.DataFrame = [v_id: string, page_id: string]

scala> jdbcDF1.show
+----+--------+
|v_id| page_id|
+----+--------+
|   7|       7|
|   5|       5|
|  10|      10|
|1002|    1002|
|   3|       3|
|1000|new page|
|1003|    1003|
|   1|       1|
|   6|       6|
|1001|        |
+----+--------+

From IntelliJ, I get the following error:

20/11/23 10:43:43 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/tigergraph/fiverr-2/spark-warehouse').
20/11/23 10:43:43 INFO SharedState: Warehouse path is 'file:/home/tigergraph/fiverr-2/spark-warehouse'.
Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: jdbc. Please find packages at http://spark.apache.org/third-party-projects.html
    at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:653)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:653)
    at scala.util.Failure.orElse(Try.scala:224)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:653)
    ... 5 more
20/11/23 10:43:46 INFO SparkContext: Invoking stop() from shutdown hook
20/11/23 10:43:46 INFO SparkUI: Stopped Spark web UI at http://tigergraph-01:4040
20/11/23 10:43:46 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/11/23 10:43:46 INFO MemoryStore: MemoryStore cleared
20/11/23 10:43:46 INFO BlockManager: BlockManager stopped
20/11/23 10:43:47 INFO BlockManagerMaster: BlockManagerMaster stopped
20/11/23 10:43:47 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/11/23 10:43:47 INFO SparkContext: Successfully stopped SparkContext
20/11/23 10:43:47 INFO ShutdownHookManager: Shutdown hook called
20/11/23 10:43:47 INFO ShutdownHookManager: Deleting directory /tmp/spark-66dd4dc4-c70b-4836-805b-d68b3183ccbf

Process finished with exit code 1

How can I fix it?

Tags: scala, apache-spark, intellij-idea, jdbc, tigergraph

Solution


You should add tg-jdbc-driver-1.2 as a dependency in your build file (pom.xml for Maven, build.sbt for sbt). spark-shell works because the --jars flag puts the driver jar on the classpath when the shell starts; when you run the program from IntelliJ, only the project's declared dependencies are on the classpath, so Spark cannot resolve the data source.
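A minimal build.sbt sketch of that fix, assuming Spark 3.0.x on Scala 2.12 (the Spark and Scala versions are assumptions, not from the question; the unmanaged-jar line reuses the locally built jar path from the question):

// build.sbt — a minimal sketch, assuming Spark 3.0.x on Scala 2.12;
// adjust the versions to match your installation.
name := "ReadVertexPage"
scalaVersion := "2.12.12"

libraryDependencies ++= Seq(
  // Do not mark these "provided" when running directly from IntelliJ,
  // or they will be missing from the runtime classpath.
  "org.apache.spark" %% "spark-core" % "3.0.1",
  "org.apache.spark" %% "spark-sql"  % "3.0.1"
)

// Add the jar built locally under ecosys (path from the question) as an
// unmanaged dependency; if tg-jdbc-driver is published to a repository you
// can declare it in libraryDependencies instead.
Compile / unmanagedJars += Attributed.blank(
  file("/home/tigergraph/ecosys/tools/etl/tg-jdbc-driver/tg-jdbc-driver/target/tg-jdbc-driver-1.2.jar"))

With the driver on the project classpath, running ReadVertexPage from IntelliJ should resolve the jdbc source the same way the --jars flag did for spark-shell.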

