Error connecting to SQL Server (Active Directory access only) from PySpark

Problem description

Experts, I am trying to connect from my PySpark application to a database on a SQL Server that can only be accessed through Active Directory. My user account and the AD group that grants access to the SQL Server are in the same domain, and I can reach the database without any problem using SQL Server Management Studio. When I try the same through PySpark, however, it fails consistently. Here is what the code looks like:

$ pyspark2 --driver-class-path sqljdbc42.jar --jars sqljdbc42.jar
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
/data/2/parcels/SPARK2-2.2.0.cloudera4-1.cdh5.13.3.p0.603055/lib/spark2/python/pyspark/context.py:203: UserWarning: Support for Python 2.6 is deprecated as of Spark 2.0.0
  warnings.warn("Support for Python 2.6 is deprecated as of Spark 2.0.0")
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.0.cloudera4
      /_/

Using Python version 2.6.6
SparkSession available as 'spark'.
>>> from pyspark.sql import SparkSession
>>> spark=SparkSession.builder.config("spark.driver.extraClassPath","sqljdbc42.jar:sqljdbc_auth.dll").getOrCreate()
>>> properties={'user':'myID','password':'myPWD','driver':'com.microsoft.sqlserver.jdbc.SQLServerDriver'}
>>> urlVar="jdbc:sqlserver://server1.domain1.com:3341;databaseName=DB1;integratedSecurity=true"
>>> df=spark.read.jdbc(url=urlVar,table='schema1.table1', properties=properties).load()
>>> df.show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/2/parcels/SPARK2-2.2.0.cloudera4-1.cdh5.13.3.p0.603055/lib/spark2/python/pyspark/sql/readwriter.py", line 165, in load
    return self._df(self._jreader.load())
  File "/data/2/parcels/SPARK2-2.2.0.cloudera4-1.cdh5.13.3.p0.603055/lib/spark2/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/data/2/parcels/SPARK2-2.2.0.cloudera4-1.cdh5.13.3.p0.603055/lib/spark2/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/data/2/parcels/SPARK2-2.2.0.cloudera4-1.cdh5.13.3.p0.603055/lib/spark2/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o72.load.
: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host server1.domain1.com, port 3341 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:191)
        at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:242)
        at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2369)
        at com.microsoft.sqlserver.jdbc.TDSChannel.open(IOBuffer.java:551)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1963)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1628)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1459)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:773)
        at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:1168)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:61)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:52)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:58)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:114)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:52)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:307)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)

The credentials I pass in the code above belong to the domain. Similar code works where a database account (outside the AD group) is available. The production server in question here does not allow personal accounts to be created for database access and supports AD authentication only.
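As an aside, two details in the snippet above may matter once the connection itself succeeds. First, `spark.read.jdbc()` already returns a DataFrame, so the trailing `.load()` would raise an `AttributeError`. Second, `integratedSecurity=true` with the sqljdbc42 driver relies on the native `sqljdbc_auth.dll`, which only works when the JVM runs on Windows; on Linux nodes, newer mssql-jdbc releases (7.4+) instead accept `authenticationScheme=NTLM` with explicit credentials and a `domain` property. A minimal sketch of assembling such a URL — the NTLM property names are an assumption based on the newer driver, not something used in the question:

```python
def build_sqlserver_url(host, port, database, **options):
    """Assemble a SQL Server JDBC URL; extra options become ;key=value pairs."""
    parts = ["jdbc:sqlserver://{0}:{1}".format(host, port),
             "databaseName={0}".format(database)]
    # Sort for a deterministic URL; SQL Server ignores property order.
    parts.extend("{0}={1}".format(k, v) for k, v in sorted(options.items()))
    return ";".join(parts)

# Same placeholder host/port/DB as in the question. NTLM properties below
# require mssql-jdbc 7.4+ on the classpath, not sqljdbc42.
url = build_sqlserver_url("server1.domain1.com", 3341, "DB1",
                          integratedSecurity="true",
                          authenticationScheme="NTLM",
                          domain="domain1")
```

With the URL built, the read itself would then be `spark.read.jdbc(url=url, table='schema1.table1', properties=properties)` with no `.load()` chained on.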

When connecting with SQL Server Management Studio I use the settings below, and it connects to the DB without any issue. The user name is domain1\user1. No password needs to be entered here because it is AD authentication.

[Screenshot: SSMS connection dialog using Windows Authentication as domain1\user1]

I have tried different things but always end up with the same error.
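Note that the stack trace never reaches the login phase: `SocketFinder.findSocket` times out, which means the node running the Spark driver cannot open a TCP connection to `server1.domain1.com:3341` at all (a firewall between the cluster and the server, or a named instance listening on a different or dynamic port — 1433 is the SQL Server default — would both produce this). Before debugging authentication at all, it may help to confirm raw reachability from that node; a small check along these lines (plain stdlib, no driver involved):

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        s = socket.create_connection((host, port), timeout=timeout)
        s.close()
        return True
    except (socket.error, socket.timeout):
        return False

# Run on the same machine that launches pyspark2:
# print(can_reach("server1.domain1.com", 3341))
```

If this returns False, the problem is network-level and no combination of JDBC authentication properties will help until the port is reachable.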

Tags: sql-server, apache-spark, pyspark, apache-spark-sql

Solution

