How to extract each word from a text file in Scala

Problem Description

I am very new to Scala. I have a text file with a single line in which words are separated by semicolons (;). I want to extract each word, remove the whitespace, convert everything to lowercase, and then access each word by its index. Here is how I went about it:

newListUpper2.txt contains (Bed;  chairs;spoon; CARPET;curtains )
val file = sc.textFile("myfile.txt")
val lower = file.map(x=>x.toLowerCase)
val result = lower.flatMap(x=>x.trim.split(";"))
result.collect.foreach(println)

Below is a copy of the REPL session when I run the code:

    scala> val file = sc.textFile("newListUpper2.txt")
    file: org.apache.spark.rdd.RDD[String] = newListUpper2.txt MapPartitionsRDD[5] at textFile at 
    <console>:24
    scala> val lower = file.map(x=>x.toLowerCase)
    lower: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[6] at map at <console>:26
    scala> val result = lower.flatMap(x=>x.trim.split(";"))
    result: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at flatMap at <console>:28
    scala> result.collect.foreach(println)
bed                                                                          
 chairs
spoon
 carpet
curtains
scala> result(0)
<console>:31: error: org.apache.spark.rdd.RDD[String] does not take parameters
       result(0)

The results are not trimmed, and passing an index as a parameter to get the word at that position raises an error. If I pass each word's index as a parameter, my expected results should look like this:

result(0) = bed
result(1) = chairs
result(2) = spoon
result(3) = carpet
result(4) = curtains

What exactly am I doing wrong?

Tags: scala, apache-spark, indexing, text-files

Solution


newListUpper2.txt contains (Bed;  chairs;spoon; CARPET;curtains )
val file = sc.textFile("myfile.txt")
val lower = file.map(x=>x.toLowerCase)
val result = lower.flatMap(x=>x.trim.split(";")) // x = `bed;  chairs;spoon; carpet;curtains` , x.trim does not work. trim func effective for head and tail only
result.collect.foreach(println)
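
To make the comment above concrete, here is a quick illustration in the plain Scala REPL (no Spark needed); the sample string mirrors the single line in newListUpper2.txt:

    val line = "Bed;  chairs;spoon; CARPET;curtains".toLowerCase
    line.trim.split(";")           // Array(bed, "  chairs", spoon, " carpet", curtains) -- the inner spaces survive
    line.split(";").map(_.trim)    // Array(bed, chairs, spoon, carpet, curtains) -- each token is trimmed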

Try this instead:

val result = lower.flatMap(x=>x.split(";").map(x=>x.trim))
