Spark SQL: joining two DataFrames without a primary key

Problem description

I have the following two DataFrames, which I want to join to produce data in a new schema:

df = sqlContext.createDataFrame([
    ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "3"),
    ("A011022", "6",  "2020-01-01", "2020-12-31", "3"),
    ("A011022", "6",  "2020-01-01", "2020-12-31", "3"),
], ["rep_id", "sales_target", "start_date", "end_date", "st_new"])
df.createOrReplaceTempView('df')
+--------------+------------+----------+----------+------+
|rep_id        |sales_target|start_date|end_date  |st_new|
+--------------+------------+----------+----------+------+
|A011021       |15          |2020-01-01|2020-12-31|4     |
|A011021       |15          |2020-01-01|2020-12-31|4     |
|A011021       |15          |2020-01-01|2020-12-31|4     |
|A011021       |15          |2020-01-01|2020-12-31|3     |
|A011022       |6           |2020-01-01|2020-12-31|3     |
|A011022       |6           |2020-01-01|2020-12-31|3     |
+--------------+------------+----------+----------+------+

df2 = sqlContext.createDataFrame([
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-01-01", "2020-03-31"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-04-01", "2020-06-30"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-07-01", "2020-09-30"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-10-01", "2020-12-31"),
    ("A011022", "6",  "2020-01-01", "2020-06-30", "2020-01-01", "2020-03-31"),
    ("A011022", "6",  "2020-01-01", "2020-06-30", "2020-04-01", "2020-06-30"),
], ["rep_id", "sales_target", "start_date", "end_date", "new_sdt", "new_edt"])
df2.createOrReplaceTempView('df2')
+--------------+------------+----------+----------+-----------+----------+
|rep_id        |sales_target|start_date|end_date  |new_sdt    |new_edt   |
+--------------+------------+----------+----------+-----------+----------+
|A011021       |15          |2020-01-01|2020-12-31|2020-01-01 |2020-03-31|
|A011021       |15          |2020-01-01|2020-12-31|2020-04-01 |2020-06-30|
|A011021       |15          |2020-01-01|2020-12-31|2020-07-01 |2020-09-30|
|A011021       |15          |2020-01-01|2020-12-31|2020-10-01 |2020-12-31|
|A011022       |6           |2020-01-01|2020-06-30|2020-01-01 |2020-03-31|
|A011022       |6           |2020-01-01|2020-06-30|2020-04-01 |2020-06-30|
+--------------+------------+----------+----------+-----------+----------+

When I run a query to join the two tables, I get duplicated results, as shown below:

select ds1.*, ds2.new_sdt, ds2.new_edt from df ds1
inner join df2 ds2
on ds1.rep_id = ds2.rep_id
where ds1.rep_id = 'A011021'

+--------------+------------+----------+----------+------+-----------+----------+
|rep_id        |sales_target|start_date|end_date  |st_new|new_sdt    |new_edt   |
+--------------+------------+----------+----------+------+-----------+----------+
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-01-01 |2020-03-31|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-01-01 |2020-03-31|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-01-01 |2020-03-31|
|A011021       |15          |2020-01-01|2020-12-31|3     |2020-01-01 |2020-03-31|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-04-01 |2020-06-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-04-01 |2020-06-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-04-01 |2020-06-30|
|A011021       |15          |2020-01-01|2020-12-31|3     |2020-04-01 |2020-06-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-07-01 |2020-09-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-07-01 |2020-09-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-07-01 |2020-09-30|
|A011021       |15          |2020-01-01|2020-12-31|3     |2020-07-01 |2020-09-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-10-01 |2020-12-31|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-10-01 |2020-12-31|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-10-01 |2020-12-31|
|A011021       |15          |2020-01-01|2020-12-31|3     |2020-10-01 |2020-12-31|
+--------------+------------+----------+----------+------+-----------+----------+

Is there a way, using Spark SQL or PySpark functions, to get only the distinct new_sdt, new_edt quarter data for a given rep_id? Please help.

The expected result is:

select ds1.*, ds2.new_sdt, ds2.new_edt from df ds1
inner join df2 ds2
on ds1.rep_id = ds2.rep_id

+--------------+------------+----------+----------+------+-----------+----------+
|rep_id        |sales_target|start_date|end_date  |st_new|new_sdt    |new_edt   |
+--------------+------------+----------+----------+------+-----------+----------+
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-01-01 |2020-03-31|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-04-01 |2020-06-30|
|A011021       |15          |2020-01-01|2020-12-31|4     |2020-07-01 |2020-09-30|
|A011021       |15          |2020-01-01|2020-12-31|3     |2020-10-01 |2020-12-31|
|A011022       |6           |2020-01-01|2020-12-31|3     |2020-01-01 |2020-03-31|
|A011022       |6           |2020-01-01|2020-12-31|3     |2020-04-01 |2020-06-30|
+--------------+------------+----------+----------+------+-----------+----------+

Tags: mysql, python-3.x, pyspark, apache-spark-sql, pyspark-dataframes

Solution


  1. Allot a unique id to each row
  2. Do an inner join on that id
  3. Drop the extra columns

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object InnerJoin {

  def main(args: Array[String]): Unit = {
    // Build a local SparkSession (stands in for the original Constant.getSparkSess helper)
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("InnerJoin")
      .getOrCreate()

    import spark.implicits._

    // 1. Allot a unique row id to each DataFrame.
    //    Note: monotonically_increasing_id() only lines the two sides up when both
    //    DataFrames are built with the same row order and partitioning.
    val df1 = List(
      ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
      ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
      ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
      ("A011021", "15", "2020-01-01", "2020-12-31", "3"),
      ("A011022", "6" , "2020-01-01", "2020-12-31", "3"),
      ("A011022", "6" , "2020-01-01", "2020-12-31", "3"))
      .toDF("rep_id", "sales_target", "start_date", "end_date", "st_new")
      .withColumn("rowid", monotonically_increasing_id())

    val df2 = List(
      ("A011021", "15", "2020-01-01", "2020-12-31", "2020-01-01", "2020-03-31"),
      ("A011021", "15", "2020-01-01", "2020-12-31", "2020-04-01", "2020-06-30"),
      ("A011021", "15", "2020-01-01", "2020-12-31", "2020-07-01", "2020-09-30"),
      ("A011021", "15", "2020-01-01", "2020-12-31", "2020-10-01", "2020-12-31"),
      ("A011022", "6" , "2020-01-01", "2020-06-30", "2020-01-01", "2020-03-31"),
      ("A011022", "6" , "2020-01-01", "2020-06-30", "2020-04-01", "2020-06-30"))
      .toDF("rep_id", "sales_target", "start_date", "end_date", "new_sdt", "new_edt")
      .withColumn("rowid", monotonically_increasing_id())

    // 2. Inner join on the synthetic row id instead of the non-unique rep_id.
    // 3. Drop the helper column (dropping by name removes it from both sides).
    df1.as("ds1").join(df2.as("ds2"),
      col("ds1.rowid") === col("ds2.rowid"),
      "inner")
      .orderBy(col("ds1.rep_id"), col("ds1.sales_target"), col("st_new").desc)
      .drop("rowid")
      .show()
  }

}

Desired output

+-------+------------+----------+----------+------+-------+------------+----------+----------+----------+----------+
| rep_id|sales_target|start_date|  end_date|st_new| rep_id|sales_target|start_date|  end_date|   new_sdt|   new_edt|
+-------+------------+----------+----------+------+-------+------------+----------+----------+----------+----------+
|A011021|          15|2020-01-01|2020-12-31|     4|A011021|          15|2020-01-01|2020-12-31|2020-04-01|2020-06-30|
|A011021|          15|2020-01-01|2020-12-31|     4|A011021|          15|2020-01-01|2020-12-31|2020-01-01|2020-03-31|
|A011021|          15|2020-01-01|2020-12-31|     4|A011021|          15|2020-01-01|2020-12-31|2020-07-01|2020-09-30|
|A011021|          15|2020-01-01|2020-12-31|     3|A011021|          15|2020-01-01|2020-12-31|2020-10-01|2020-12-31|
|A011022|           6|2020-01-01|2020-12-31|     3|A011022|           6|2020-01-01|2020-06-30|2020-04-01|2020-06-30|
|A011022|           6|2020-01-01|2020-12-31|     3|A011022|           6|2020-01-01|2020-06-30|2020-01-01|2020-03-31|
+-------+------------+----------+----------+------+-------+------------+----------+----------+----------+----------+
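Since the question asks for Spark SQL or PySpark, here is a minimal PySpark sketch of the same row-id approach (allot a synthetic id, inner join on it, drop the helper column). The DataFrame names df and df2 come from the question; the appName is arbitrary, and monotonically_increasing_id() only pairs the rows correctly when both DataFrames are created with the same row order and partitioning, so treat this as a sketch rather than a guaranteed drop-in.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("join_without_key").getOrCreate()

df = spark.createDataFrame([
    ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "4"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "3"),
    ("A011022", "6",  "2020-01-01", "2020-12-31", "3"),
    ("A011022", "6",  "2020-01-01", "2020-12-31", "3"),
], ["rep_id", "sales_target", "start_date", "end_date", "st_new"])

df2 = spark.createDataFrame([
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-01-01", "2020-03-31"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-04-01", "2020-06-30"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-07-01", "2020-09-30"),
    ("A011021", "15", "2020-01-01", "2020-12-31", "2020-10-01", "2020-12-31"),
    ("A011022", "6",  "2020-01-01", "2020-06-30", "2020-01-01", "2020-03-31"),
    ("A011022", "6",  "2020-01-01", "2020-06-30", "2020-04-01", "2020-06-30"),
], ["rep_id", "sales_target", "start_date", "end_date", "new_sdt", "new_edt"])

# Allot a synthetic row id on both sides.
ds1 = df.withColumn("rowid", F.monotonically_increasing_id())
ds2 = df2.withColumn("rowid", F.monotonically_increasing_id())

# Inner join on the row id, keep only the quarter columns from ds2,
# then drop the helper column to get one quarter row per target row.
result = (
    ds1.join(ds2.select("rowid", "new_sdt", "new_edt"), "rowid", "inner")
       .drop("rowid")
       .orderBy("rep_id", "new_sdt")
)
result.show(truncate=False)

If the row alignment needs to be deterministic, a variant is to generate the id with row_number() over a window partitioned by rep_id on both sides and join on (rep_id, row number) instead of the global id.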
