sklearn LatentDirichletAllocation: topic inference on a new corpus

Question

I have been using the sklearn.decomposition.LatentDirichletAllocation module to explore a corpus of documents. After several iterations of training and tuning the model (i.e., adding stop words and synonyms, varying the number of topics), I am fairly satisfied with and familiar with the distilled topics. As a next step, I would like to apply the trained model to a new corpus.

Is it possible to apply the fitted model to a new set of documents to determine their topic distributions?

I know this is possible in the gensim library, where you can train a model:

from gensim.test.utils import common_texts
from gensim.corpora.dictionary import Dictionary
from gensim.models import LdaModel

# Create a corpus from a list of texts
common_dictionary = Dictionary(common_texts)
common_corpus = [common_dictionary.doc2bow(text) for text in common_texts]

# Train an LDA model on the bag-of-words corpus
lda = LdaModel(common_corpus, num_topics=10)

and subsequently apply the trained model to a new corpus:

topic_distributions = lda[unseen_doc]

From: https://radimrehurek.com/gensim/models/ldamodel.html
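(For context, unseen_doc in that snippet is a bag-of-words vector built with the same dictionary used at training time. A minimal sketch, with placeholder tokens not taken from the question:)

# Hypothetical unseen document, tokenized the same way as the training texts
unseen_doc = common_dictionary.doc2bow(['computer', 'time', 'graph'])
topic_distributions = lda[unseen_doc]  # list of (topic_id, probability) pairs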

How does one do this with the scikit-learn implementation of LDA?

Tags: python, scikit-learn, lda, topic-modeling

Solution


Doesn't transform do this?

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> from sklearn.decomposition import LatentDirichletAllocation
>>> from sklearn.datasets import fetch_20newsgroups
>>> 
>>> n_samples = 2000
>>> n_features = 1000
>>> n_components = 10
>>> 
>>> dataset = fetch_20newsgroups(shuffle=True, random_state=1,
...                              remove=('headers', 'footers', 'quotes'))
>>> data_samples = dataset.data[:n_samples]
>>> 
>>> tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
...                                 max_features=n_features,
...                                 stop_words='english')
>>> tf = tf_vectorizer.fit_transform(data_samples)
>>> 
>>> lda = LatentDirichletAllocation(n_components=n_components, max_iter=5,
...                                 learning_method='online',
...                                 learning_offset=50.,
...                                 random_state=0)
>>> 
>>> lda.fit(tf)
LatentDirichletAllocation(batch_size=128, doc_topic_prior=None,
             evaluate_every=-1, learning_decay=0.7,
             learning_method='online', learning_offset=50.0,
             max_doc_update_iter=100, max_iter=5, mean_change_tol=0.001,
             n_components=10, n_jobs=1, n_topics=None, perp_tol=0.1,
             random_state=0, topic_word_prior=None,
             total_samples=1000000.0, verbose=0)
>>> 
>>> print(lda.transform(tf_vectorizer.transform(dataset.data[-3:])))
[[0.0142868  0.63695359 0.01428674 0.01428686 0.01428606 0.01429304
  0.014286   0.24874298 0.01429136 0.01428656]
 [0.01111385 0.45234109 0.01111409 0.45875254 0.01111215 0.01111384
  0.01111214 0.01111282 0.01111441 0.01111307]
 [0.001786   0.68840635 0.00178639 0.00178615 0.00178625 0.00178627
  0.00178587 0.00178627 0.29730378 0.00178667]]
