How to undersample the majority class with PySpark

Problem description

I tried to handle the data as in the code below, but I have not figured out how to do it with groupBy and a UDF, and I also found that a UDF cannot return a DataFrame.

Is there any way to do this in Spark, or some other approach, that can handle the imbalanced data?

import pandas as pd

ratio = 3

def balance_classes(grp):
    # Comments the editors picked for this article.
    picked = grp.loc[grp.editorsSelection == True]
    n = round(picked.shape[0] * ratio)
    if n:
        try:
            not_picked = grp.loc[grp.editorsSelection == False].sample(n)
        except ValueError:  # Fewer than n comments with editorsSelection == False
            not_picked = grp.loc[grp.editorsSelection == False]
        balanced_grp = pd.concat([picked, not_picked])
        return balanced_grp
    else:
        # If an article has no editor's pick, discard all of its comments.
        return None

comments = comments.groupby('articleID').apply(balance_classes).reset_index(drop=True)
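
As a side note on the groupBy/UDF point: a plain UDF indeed cannot return a DataFrame, but in Spark 3.x groupBy(...).applyInPandas(...) runs a pandas function once per group and returns its rows, so roughly the same per-article logic could be sketched as below (comments_sdf is a hypothetical Spark DataFrame with the same columns as the pandas one; the grouped function must return an empty frame instead of None):

import pandas as pd

ratio = 3

def balance_classes_pd(grp: pd.DataFrame) -> pd.DataFrame:
    picked = grp.loc[grp.editorsSelection == True]
    n = round(picked.shape[0] * ratio)
    if not n:
        # No editor's pick for this article: drop all of its comments.
        return grp.iloc[0:0]
    not_picked = grp.loc[grp.editorsSelection == False]
    if len(not_picked) > n:
        not_picked = not_picked.sample(n)
    return pd.concat([picked, not_picked])

balanced = (
    comments_sdf
    .groupBy("articleID")
    .applyInPandas(balance_classes_pd, schema=comments_sdf.schema)
)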

Tags: python, apache-spark

Solution


I usually use this logic to undersample:

from pyspark.sql.functions import col

def resample(base_features, ratio, class_field, base_class):
    # Split into the minority (positive) class and the majority (negative) class.
    pos = base_features.filter(col(class_field) == base_class)
    neg = base_features.filter(col(class_field) != base_class)
    total_pos = pos.count()
    total_neg = neg.count()
    # Downsample the majority class to roughly ratio * total_pos rows.
    fraction = float(total_pos * ratio) / float(total_neg)
    sampled = neg.sample(False, fraction)
    return sampled.union(pos)

base_features is the Spark DataFrame holding these features, ratio is the desired ratio of negative to positive rows, class_field is the name of the column that contains the class label, and base_class is the id (value) of the positive class.
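
For example, applied to data shaped like the question's, a call could look like the following sketch (comments_sdf is a made-up stand-in DataFrame; the column name editorsSelection comes from the question's pandas code):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy Spark DataFrame mirroring the question's columns.
comments_sdf = spark.createDataFrame(
    [("a1", True), ("a1", False), ("a1", False), ("a1", False),
     ("a1", False), ("a2", True), ("a2", False), ("a2", False)],
    ["articleID", "editorsSelection"],
)

# Keep every editor's pick and roughly 3 times as many non-picks.
balanced = resample(comments_sdf, ratio=3, class_field="editorsSelection", base_class=True)
balanced.show()

Because sample draws each row independently with the given fraction, the resulting ratio is approximate rather than exact.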

