docker - How do I set FTP passive mode in Spark when reading a file from an FTP server?
Question
I am reading a file from an FTP server into a Spark RDD like this:
val rdd = spark.sparkContext.textFile("ftp://anonymous:pwd@<hostname>/data.gz")
rdd.count
...
This works when I run the Spark application from my local machine (a Mac), but when I run the same application from a Docker container (running on that Mac), I get the following exception:
Exception in thread "main" org.apache.commons.net.ftp.FTPConnectionClosedException: Connection closed without indication.
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:313)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:601)
at org.apache.commons.net.ftp.FTP.quit(FTP.java:809)
at org.apache.commons.net.ftp.FTPClient.logout(FTPClient.java:979)
at org.apache.hadoop.fs.ftp.FTPFileSystem.disconnect(FTPFileSystem.java:168)
at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:415)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1676)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:205)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.MapOutputTrackerMaster.getPreferredLocationsForShuffle(MapOutputTracker.scala:626)
at org.apache.spark.rdd.ShuffledRDD.getPreferredLocations(ShuffledRDD.scala:99)
at org.apache.spark.rdd.RDD.$anonfun$preferredLocations$2(RDD.scala:300)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.preferredLocations(RDD.scala:300)
at org.apache.spark.scheduler.DAGScheduler.getPreferredLocsInternal(DAGScheduler.scala:2098)
at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:2072)
at org.apache.spark.SparkContext.getPreferredLocs(SparkContext.scala:1794)
at org.apache.spark.rdd.DefaultPartitionCoalescer.currPrefLocs(CoalescedRDD.scala:180)
at org.apache.spark.rdd.DefaultPartitionCoalescer$PartitionLocations.$anonfun$getAllPrefLocs$1(CoalescedRDD.scala:198)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at org.apache.spark.rdd.DefaultPartitionCoalescer$PartitionLocations.getAllPrefLocs(CoalescedRDD.scala:197)
at org.apache.spark.rdd.DefaultPartitionCoalescer$PartitionLocations.<init>(CoalescedRDD.scala:190)
at org.apache.spark.rdd.DefaultPartitionCoalescer.coalesce(CoalescedRDD.scala:391)
at org.apache.spark.rdd.CoalescedRDD.getPartitions(CoalescedRDD.scala:90)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2158)
at org.apache.spark.rdd.RDD.count(RDD.scala:1227)
at com.mypackage.Myapp$.parseData(Myapp.scala:76)
Inside the container, even the ftp command-line utility has the same problem, but after switching to passive mode in the ftp CLI, I was able to transfer the file from the FTP server to the container successfully:
ftp <host>
...
ftp> passive
Passive mode on.
ftp> get data.gz
227 Entering Passive Mode ...
226 Transfer complete
20676672 bytes received in 25.53 secs (790.9552 kB/s)
So my question is: how do I set the passive mode property when reading the file with Spark, i.e. in spark.sparkContext.textFile("ftp://anonymous:pwd@<hostname>/data.gz")?
Solution
I have no experience with Spark, so I do not know how it integrates with Hadoop. But in Hadoop, you can enable FTP passive mode by setting the fs.ftp.data.connection.mode configuration option:
fs.ftp.data.connection.mode=PASSIVE_LOCAL_DATA_CONNECTION_MODE
You need at least Hadoop 2.9: https://issues.apache.org/jira/browse/HADOOP-13953
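In Spark, Hadoop filesystem options like this one can generally be set on the SparkContext's Hadoop configuration before the first read. A minimal sketch (untested on my side; it assumes Spark hands this configuration down to the underlying FTPFileSystem, which is its usual behaviour):

// Set the Hadoop FTP option before the first read so FTPFileSystem picks it up.
// Requires Hadoop 2.9+ (HADOOP-13953).
spark.sparkContext.hadoopConfiguration
  .set("fs.ftp.data.connection.mode", "PASSIVE_LOCAL_DATA_CONNECTION_MODE")

val rdd = spark.sparkContext.textFile("ftp://anonymous:pwd@<hostname>/data.gz")
rdd.count

Alternatively, Hadoop options can be passed at submit time with the spark.hadoop. prefix, e.g. --conf spark.hadoop.fs.ftp.data.connection.mode=PASSIVE_LOCAL_DATA_CONNECTION_MODE; Spark copies such properties into its Hadoop configuration.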