pyspark issue: java.io.IOException: No FileSystem for scheme: s3

Problem Description

Use case: read a CSV file from S3 and create a DataFrame.

Code used:

import boto3
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
s3 = boto3.client('s3', aws_access_key_id='xxxxxxxxxxxxx', aws_secret_access_key='xxx')

cust_Address_SOURCE_PATH = "s3://log-bucket-poc-varun/"
read_s3_address_cust_df = spark.read.format("com.databricks.spark.csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .load(cust_Address_SOURCE_PATH)
print(read_s3_address_cust_df.show())

Error: An error occurred while calling o763.load.
: java.io.IOException: No FileSystem for scheme: s3
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)

Tags: pyspark

Solution


Use s3a instead of s3 to access your S3 files from pyspark. Stock Hadoop has no FileSystem implementation registered for the s3 scheme, whereas s3a is served by the S3A connector from the hadoop-aws module.
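The fix amounts to rewriting the URI scheme. A minimal sketch; the helper name to_s3a is hypothetical, not a Spark or Hadoop API:

```python
def to_s3a(uri: str) -> str:
    """Return the same URI with a legacy s3:// (or s3n://) scheme
    replaced by s3a://; other URIs are returned unchanged."""
    for legacy in ("s3://", "s3n://"):
        if uri.startswith(legacy):
            return "s3a://" + uri[len(legacy):]
    return uri

print(to_s3a("s3://log-bucket-poc-varun/"))  # s3a://log-bucket-poc-varun/
```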

Change s3://log-bucket-poc-varun/ to s3a://log-bucket-poc-varun/.
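Note that the boto3 client in the original code plays no part in the Spark read: Spark does not see boto3's credentials, so they must also be passed to the S3A connector via fs.s3a settings. A sketch of the corrected read, assuming the hadoop-aws module (and its AWS SDK dependency) is on Spark's classpath; the helper names are illustrative:

```python
def s3a_spark_conf(access_key: str, secret_key: str) -> dict:
    """Spark config entries that hand AWS credentials to the S3A connector.
    boto3 credentials are invisible to Spark, so they must be set here."""
    return {
        "spark.hadoop.fs.s3a.access.key": access_key,
        "spark.hadoop.fs.s3a.secret.key": secret_key,
        # Explicit scheme-to-class mapping; recent hadoop-aws versions
        # register this by default.
        "spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
    }

def read_customer_csv(access_key: str, secret_key: str,
                      path: str = "s3a://log-bucket-poc-varun/"):
    """Build a configured SparkSession and read the CSV via s3a://."""
    # Imported here so the pure config helper above works without pyspark.
    from pyspark.sql import SparkSession
    builder = SparkSession.builder
    for key, value in s3a_spark_conf(access_key, secret_key).items():
        builder = builder.config(key, value)
    spark = builder.getOrCreate()
    # Spark's built-in csv source replaces com.databricks.spark.csv,
    # which was folded into Spark as of 2.0.
    return (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv(path))
```

Passing credentials through spark.hadoop.fs.s3a.* keeps them out of the URI; in production, an instance profile or credentials provider chain is preferable to hard-coded keys.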

