Removing any row that contains NULL

Problem description

I have a small problem: I want to remove any row that contains "NULL".

Here is my input file:

matricule,dateins,cycle,specialite,bourse,sport
0000000001,1999-11-22,Master,IC,Non,Non
0000000002,2014-02-01,Null,IC,Null,Oui
0000000003,2006-09-07,Null,Null,Oui,Oui
0000000004,2008-12-11,Master,IC,Oui,Oui
0000000005,2006-06-07,Master,SI,Non,Oui

I did a lot of research and found a function called drop(any), which is supposed to remove any row containing a NULL value. I tried to use it with the code below, but it doesn't work.

val x = sc.textFile("/home/amel/one")

val re = x.map(row => {
  val cols = row.split(",")
  val cycle = cols(2)
  // map each cycle to the number of years it takes
  val years = cycle match {
    case "License" => "3 years"
    case "Master" => "3 years"
    case "Ingeniorat" => "5 years"
    case "Doctorate" => "3 years"
    case _ => "other"
  }
  // key: enrolment year, duration, cycle, specialty; value: 1 for counting
  (cols(1).split("-")(0) + "," + years + "," + cycle + "," + cols(3), 1)
}).reduceByKey(_ + _)
re.collect.foreach(println)

Here is the current output of my code:

(1999,3 years,Master,IC,57)
(2013,NULL,Doctorat,SI,44)
(2013,NULL,Licence,IC,73)
(2009,5 years,Ingeniorat,Null,58)
(2011,3 years,Master,Null,61)
(2003,5 years,Ingeniorat,Null,65)
(2019,NULL,Doctorat,SI,80)

However, I want the result to look like this:

(1999, 3 years, Master, IC)

That is, any row containing "NULL" should be removed.

Tags: scala, apache-spark

Solution


Similar to, but not a duplicate of, the following question on SO: Filter spark DataFrame on string contains

The drop function you found belongs to the DataFrame API (`df.na.drop("any")`) and is not available on an RDD. Since you are reading the file as an RDD of raw lines, filter out the bad rows as you read them in. Note that your data contains both "Null" and "NULL", so the check should be case-insensitive:

val x = sc.textFile("/home/amel/one").filter(!_.toLowerCase.contains("null"))
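Why the `toLowerCase` matters: the file mixes the spellings "Null" and "NULL", and lowercasing the whole line first catches both. A minimal sketch of the same predicate using plain Scala collections (no Spark required; the sample rows are copied from the input file above):

```scala
// Sample lines taken from the input file in the question.
val lines = Seq(
  "0000000001,1999-11-22,Master,IC,Non,Non",
  "0000000002,2014-02-01,Null,IC,Null,Oui",
  "0000000004,2008-12-11,Master,IC,Oui,Oui"
)

// Same predicate as in the RDD version: keep a line only if,
// after lowercasing, it does not contain "null" anywhere.
val kept = lines.filter(!_.toLowerCase.contains("null"))

kept.foreach(println)
// keeps only rows 0000000001 and 0000000004
```

The predicate works unchanged inside `filter` on the RDD. Alternatively, if you switch to the DataFrame API, reading the file with `spark.read.option("nullValue", "Null").option("header", "true").csv(...)` turns that literal string into a real null (assuming the file spells it consistently), after which the `na.drop("any")` you found behaves as you expected.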
