pyspark wordcount sort by value

Problem description

I am learning pyspark and trying the code below. Can someone help me understand what the problem is?

>>> pairs = data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b)
>>> pairs.collect()
[(u'one', 1), (u'ball', 4), (u'apple', 4), (u'two', 4), (u'three', 1)]
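
(The input data is not shown in the question; a hypothetical RDD, built in the pyspark shell where sc is predefined, that would produce the pairs above:)

data = sc.parallelize([
    "one ball apple two",
    "ball apple two three",
    "ball apple two",
    "ball apple two",
])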

pairs = data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b).map(lambda a, b: (b, a)).sortByKey()

I am trying to sort by value, and the code above gives me the error below:

19/09/25 08:55:07 WARN TaskSetManager: Lost task 1.0 in stage 36.0 (TID 67, dbsld0107.uhc.com, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/mapr/tmp/hadoop-mapr/nm-local-dir/usercache/avenkat9/appcache/application_1565204780647_2728325/container_e07_1565204780647_2728325_01_000003/pyspark.zip/pyspark/worker.py", line 177, in main
    process()
  File "/opt/mapr/tmp/hadoop-mapr/nm-local-dir/usercache/avenkat9/appcache/application_1565204780647_2728325/container_e07_1565204780647_2728325_01_000003/pyspark.zip/pyspark/worker.py", line 172, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 2423, in pipeline_func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 2423, in pipeline_func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 2423, in pipeline_func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 346, in func
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 1041, in <lambda>
  File "/opt/mapr/spark/spark-2.2.1/python/pyspark/rdd.py", line 1041, in <genexpr>
TypeError: <lambda>() takes exactly 2 arguments (1 given)

Tags: apache-spark, pyspark

Solution

I think you are trying to sort by value. The TypeError happens because map() passes each element to the lambda as a single (word, count) tuple, so a lambda expecting two arguments only ever receives one. To sort by value directly, try this:

data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b).sortBy(lambda a: a[1]).collect()
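
sortBy sorts ascending by default. If you want the most frequent words first, sortBy also accepts an ascending flag; a minimal sketch of the same pipeline, sorted descending:

data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b).sortBy(lambda a: a[1], ascending=False).collect()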

If you would rather fix your original code, try the following:

data.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b).map(lambda a: (a[1], a[0])).sortByKey().collect()
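
This swaps each (word, count) pair into (count, word) so that sortByKey sorts on the count. sortByKey also takes an ascending flag, and you can swap back at the end if you want (word, count) pairs in the result; a sketch:

(data.flatMap(lambda x: x.split(' '))
     .map(lambda x: (x, 1))                # one (word, 1) pair per word
     .reduceByKey(lambda a, b: a + b)      # (word, count)
     .map(lambda a: (a[1], a[0]))          # swap to (count, word)
     .sortByKey(ascending=False)           # sort on count, descending
     .map(lambda a: (a[1], a[0]))          # swap back to (word, count)
     .collect())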
