Reusing a clustering algorithm in Python

Problem description

I have built a clustering model that segments my data well. I used a two-step process (KMeans followed by hierarchical clustering) to avoid the memory problems I ran into when I tried hierarchical clustering directly (see https://www.dummies.com/programming/big-data/data-science/data-science-performing-hierarchical-clustering-with-python/).

My question is about how to reuse this process to score new data. I am trying to keep my code modular, so I would like to "export" the trained models from the clustering step and "import" them in a scoring step, but I don't know how to export both models. Here is my code:

# Assumed imports for this snippet
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.preprocessing import normalize

data_scaled = normalize(col_final_df)
data_scaled = pd.DataFrame(data_scaled, columns=col_final_df.columns)

clustering = KMeans(n_clusters=km_seg, n_init=10,
                    random_state=1)
clustering.fit(data_scaled)

post_clust_centres = clustering.cluster_centers_
post_clust_data_mapping = {case: cluster for case, cluster in enumerate(clustering.labels_)}

print('KMeans analysis complete.  Composing hierarchical segmentation of KMeans presently...')

# note: 'affinity' was renamed 'metric' in newer scikit-learn releases
Hclustering = AgglomerativeClustering(n_clusters=29, affinity="cosine", linkage="complete")
Hclustering.fit(post_clust_centres)

print('Hierarchical segmentation complete.  Composing dendrogram...')

plt.title('Hierarchical Clustering Dendrogram')
plot_dendrogram(Hclustering, labels=Hclustering.labels_)  # plot_dendrogram: helper defined elsewhere
plt.show()

H_mapping = {case: cluster for case,
                               cluster in enumerate(Hclustering.labels_)}
final_mapping = {case: H_mapping[post_clust_data_mapping[case]]
                 for case in post_clust_data_mapping}
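For context, the composition of the two label sets works like this: KMeans assigns each row to a centroid, the hierarchical pass assigns each centroid to a coarser segment, so a row's final segment is the hierarchical label of its centroid. A minimal, self-contained sketch with synthetic data (the array `X` and the cluster counts here are illustrative, not from the real pipeline; the default euclidean metric is used for brevity):

```python
# Standalone sketch of the two-step mapping on synthetic data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(1)
X = rng.random((60, 4))

# Step 1: fine-grained KMeans segmentation of the rows.
km = KMeans(n_clusters=10, n_init=10, random_state=1).fit(X)

# Step 2: hierarchical clustering of the 10 centroids into 3 coarse segments.
hc = AgglomerativeClustering(n_clusters=3, linkage="complete").fit(km.cluster_centers_)

# Compose: each row's final segment is the hierarchical label of its centroid.
final_mapping = {case: hc.labels_[label] for case, label in enumerate(km.labels_)}

assert len(final_mapping) == 60
assert set(int(v) for v in final_mapping.values()) <= {0, 1, 2}
```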

Tags: python, hierarchical-clustering

Solution

Pickling turned out to be the easy approach, because I can save each whole fitted object and re-import it in a new function as needed. Realizing that re-loading the pickles on every call would use too much i/o, I make sure to do it only once.

To pickle the models, I added the following code at the end of the clustering routine:


    with open(Config.PATH + '/kmeans.pickle', 'wb') as handle:
        pickle.dump(clustering, handle, protocol=pickle.HIGHEST_PROTOCOL)

    with open(Config.PATH + '/hclust.pickle', 'wb') as handle:
        pickle.dump(Hclustering, handle, protocol=pickle.HIGHEST_PROTOCOL)
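As a sanity check (not part of the original routine), one can verify that a pickled KMeans model predicts identically after a round trip. The data and path below are throwaway stand-ins; note that a pickle should only be loaded with the same scikit-learn version that wrote it:

```python
# Round-trip check: a restored model should predict exactly as the original.
import os
import pickle
import tempfile

import numpy as np
from sklearn.cluster import KMeans

X = np.random.RandomState(0).rand(30, 3)
model = KMeans(n_clusters=4, n_init=10, random_state=1).fit(X)

path = os.path.join(tempfile.mkdtemp(), "kmeans.pickle")
with open(path, "wb") as handle:
    pickle.dump(model, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open(path, "rb") as handle:
    restored = pickle.load(handle)

assert (restored.predict(X) == model.predict(X)).all()
```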

And here is the scoring code I use to return the segment for each data vector:


import pickle

import pandas as pd
from sklearn.preprocessing import normalize


def score_data(data):
    with open(Config.PATH + "/kmeans.pickle", 'rb') as handle:
        clustering = pickle.load(handle)

    with open(Config.PATH + "/hclust.pickle", 'rb') as handle:
        Hclustering = pickle.load(handle)

    data_scaled = normalize(data)
    data_scaled = pd.DataFrame(data_scaled, columns=data.columns)

    # Assign the new data to the trained KMeans clusters
    clustering.labels_ = clustering.predict(data_scaled)
    post_clust_data_mapping = {case: cluster for case, cluster in enumerate(clustering.labels_)}

    H_mapping = {case: cluster for case,
                                   cluster in enumerate(Hclustering.labels_)}
    final_mapping = {case: H_mapping[post_clust_data_mapping[case]]
                     for case in post_clust_data_mapping}

    final_mapping_ls = list(final_mapping.values())

    return [x + 1 for x in final_mapping_ls]
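To keep the i/o down to a single load, as mentioned above, one option is to cache the unpickled models so repeated scoring calls hit the disk only once. A hedged sketch using `functools.lru_cache` with stand-in objects (`load_models` is a hypothetical helper, not part of the original code):

```python
import pickle
import tempfile
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=None)
def load_models(path):
    """Unpickle both models once; repeated calls reuse the cached pair."""
    with open(Path(path) / "kmeans.pickle", "rb") as handle:
        kmeans = pickle.load(handle)
    with open(Path(path) / "hclust.pickle", "rb") as handle:
        hclust = pickle.load(handle)
    return kmeans, hclust

# Demo with stand-in picklable objects instead of fitted models.
tmp = tempfile.mkdtemp()
for name in ("kmeans.pickle", "hclust.pickle"):
    with open(Path(tmp) / name, "wb") as handle:
        pickle.dump({"model": name}, handle, protocol=pickle.HIGHEST_PROTOCOL)

first = load_models(tmp)
assert load_models(tmp) is first  # second call served from the cache
```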
