PySpark issue with temporary AWS tokens for S3 authentication

Problem description

I have set up PySpark locally, but every time I try to read a file from S3 with the s3a protocol it returns a 403 AccessDenied error. The account I am trying to connect to only supports AWS assume-role, which gives me a temporary Access_key, Secret_key and session_token.

I am using Spark 2.4.4, Hadoop 2.7.3 and the aws-java-sdk-1.7.4 jar. I know the problem is not my security token, because I can use the same credentials in boto3 to query the same bucket (a sketch of that check follows the Spark code below). I am setting up my Spark session as follows:

conf = spark.sparkContext._conf.setAll([
    ('fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'),
    ('fs.s3a.aws.credentials.provider', 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider'),
    ('fs.s3a.endpoint', 's3-ap-southeast-2.amazonaws.com'),
    ('fs.s3a.access.key', '...'),
    ('fs.s3a.secret.key', '...'),
    ('fs.s3a.session.token', '...')])

spark_01 = spark.builder.config(conf=conf).appName('s3connection').getOrCreate()

df = spark_01.read.load('s3a://<some bucket>')
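
For comparison, the boto3 check that works with the same temporary credentials looks roughly like the sketch below (the bucket name, region and key values are placeholders, not details from the original post):

import boto3

# Same temporary credentials as above (placeholders)
session = boto3.session.Session(
    aws_access_key_id='...',
    aws_secret_access_key='...',
    aws_session_token='...',
    region_name='ap-southeast-2')

s3 = session.client('s3')
# Listing the bucket succeeds, so the temporary credentials themselves are valid
print(s3.list_objects_v2(Bucket='<some bucket>', MaxKeys=5))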

The error I get from the Spark read is this:

com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: ... , AWS Error Code

Update: the full stack trace:

19/10/08 16:37:17 WARN FileStreamSink: Error while looking for metadata directory.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 166, in load
    return self._df(self._jreader.load(path))
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o47.load.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: DFF18E66D647F534, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: ye5NgB5wRhmHpn37tghQ0EuO9K6vPDE/1+Y6m3Y5sGqxD9iFOktFUjdqzn6hd/aHoakEXmafA9o=
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.immutable.List.flatMap(List.scala:355)
        at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
        at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)

Tags: python-3.x, apache-spark, amazon-s3, pyspark

Solution


To resolve this, two things are needed. (I see you are already doing the second one in your code, so only the first is actually required.)

  1. Use only hadoop-aws-2.8.5.jar, instead of aws-java-sdk-1.7.4.jar and hadoop-aws-2.7.7.jar. (See the Getting Started section at https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html.)
  2. Set fs.s3a.aws.credentials.provider as you already do: ('fs.s3a.aws.credentials.provider', 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider'). This is what enables the session token. With this setting you can either supply the keys in code, as shown, or use the system environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN. (A combined sketch follows this list.)
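
Putting the two steps together, a minimal sketch of the fixed setup might look like the following. Passing the s3a options through the spark.hadoop. prefix and pulling in hadoop-aws via spark.jars.packages are assumptions about the deployment, not details from the answer; the placeholder keys can also be omitted and picked up from the AWS_* environment variables instead.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName('s3connection')
         # Assumption: resolve the newer hadoop-aws jar from Maven at startup
         .config('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:2.8.5')
         # spark.hadoop.* entries are copied into the Hadoop configuration used by s3a
         .config('spark.hadoop.fs.s3a.endpoint', 's3-ap-southeast-2.amazonaws.com')
         .config('spark.hadoop.fs.s3a.aws.credentials.provider',
                 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider')
         .config('spark.hadoop.fs.s3a.access.key', '...')
         .config('spark.hadoop.fs.s3a.secret.key', '...')
         .config('spark.hadoop.fs.s3a.session.token', '...')
         .getOrCreate())

df = spark.read.load('s3a://<some bucket>')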

For reference, the setting ('fs.s3a.aws.credentials.provider', 'com.amazonaws.auth.DefaultAWSCredentialsProviderChain') is also useful for loading the credential keys from ~/.aws/credentials without putting them in the source code. (See http://wrschneider.github.io/2019/02/02/spark-credentials-file.html)
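
That variant only changes the provider setting; a sketch under the same assumptions as above:

from pyspark.sql import SparkSession

# The AWS SDK's default chain then resolves credentials from the environment,
# ~/.aws/credentials, instance profiles, etc., so nothing is hard-coded here.
spark = (SparkSession.builder
         .appName('s3connection')
         .config('spark.hadoop.fs.s3a.aws.credentials.provider',
                 'com.amazonaws.auth.DefaultAWSCredentialsProviderChain')
         .getOrCreate())

df = spark.read.load('s3a://<some bucket>')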

