How to simplify text comparison on a large dataset where the text has the same meaning but is not exact - deduplicating text data

Problem description

I have a text dataset of roughly 1.8 million records (distinct menu items such as chocolate, cake, coke, etc.) spread across 6 categories (A, B, C, D, E, F). One of the categories alone holds about 700k records. Most menu items are mixed into several categories they do not belong to, e.g. cake belongs to category "A" but can also be found in categories "B" and "C".

I want to identify those misclassified items and report them to the staff, but the challenge is that the item names are not always spelled correctly, because they are entirely free-form human input. For example: chocolate might be entered as hot chclt, sweet choklate, chocolat, etc. There can also be items such as chocolate cake ;)

To tackle this, I tried a simple approach: comparing items across categories with cosine similarity and flagging the anomalies. But since every item is compared against 1.8 million records, it takes far too long (sample code below). Can anyone suggest a better way to handle this?
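
For concreteness, a minimal sketch (with hypothetical values) of the per-category DataFrame layout the snippets below assume:

import pandas as pd

# Hypothetical toy versions of the per-category frames used below;
# the real dataset has ~1.8M rows across six categories.
category_A = pd.DataFrame({'item_name': ['cake', 'chocolat', 'hot chclt']})
category_B = pd.DataFrame({'item_name': ['sweet choklate', 'cake', 'coke']})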

#Function
from nltk.corpus import stopwords 
from nltk.tokenize import word_tokenize 
# requires the NLTK data packages: nltk.download('punkt'), nltk.download('stopwords')

def cos_similarity(a, b):
    # tokenization
    X_list = word_tokenize(a)
    Y_list = word_tokenize(b)

    # sw contains the set of English stopwords
    sw = set(stopwords.words('english'))

    # remove stop words from each string
    X_set = {w for w in X_list if w not in sw}
    Y_set = {w for w in Y_list if w not in sw}

    # form a set containing keywords of both strings
    rvector = X_set.union(Y_set)

    # binary occurrence vectors over the shared vocabulary
    l1 = [1 if w in X_set else 0 for w in rvector]
    l2 = [1 if w in Y_set else 0 for w in rvector]

    # cosine formula: dot product / product of the vector norms
    c = sum(x * y for x, y in zip(l1, l2))
    denom = float((sum(l1) * sum(l2)) ** 0.5)
    return c / denom if denom > 0 else 0
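
A quick usage example (hypothetical strings) to show what the function returns:

# the two names share one keyword ('chocolate') out of |X_set| = 2 and
# |Y_set| = 3 keywords, so the score is 1 / sqrt(2 * 3) ≈ 0.41
print(cos_similarity('hot chocolate', 'sweet chocolate drink'))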

#Base code: brute-force pairwise comparison (O(n*m), far too slow at this scale)
cos_sim_list = []
for i in category_B.index:
    ln_i = str(category_B['item_name'][i])
    for j in category_A.index:
        ln_j = str(category_A['item_name'][j])
        degreeOfSimilarity = cos_similarity(ln_j, ln_i)
        if degreeOfSimilarity > 0.5:
            cos_sim_list.append([ln_j, ln_i, degreeOfSimilarity])

Assume the text has already been cleaned.

Tags: python-3.x, machine-learning, nlp, duplicates, cosine-similarity

Solution


I solved this case using k-nearest neighbours together with cosine similarity. Although I had to run the code several times to compare the categories pair by pair, it still works because the number of categories is small. If anyone has a better solution, please suggest it.

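The snippet below passes an `ngrams` analyzer to TfidfVectorizer but never defines it. A minimal character-trigram version (an assumption on my part, in the spirit of the usual TF-IDF fuzzy-matching recipe) could look like this:

import re

def ngrams(string, n=3):
    # assumed helper: strip some punctuation, then emit overlapping
    # character trigrams, e.g. 'choklate' -> ['cho', 'hok', 'okl', ...]
    string = re.sub(r'[,\-./]', '', str(string).lower())
    return [string[i:i + n] for i in range(len(string) - n + 1)]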

cat_A_clean = category_A['item_name'].unique()

from sklearn.feature_extraction.text import TfidfVectorizer

print('Vectorizing the data - this could take a few minutes for large datasets...')
vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams, lowercase=False)
tfidf = vectorizer.fit_transform(cat_A_clean)
print('Vectorizing completed...')

from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=1, n_jobs=-1).fit(tfidf)

# use a list (not a set) so the iteration order is stable between
# the kneighbors query and the match-building loop below
unique_B = list(set(category_B['item_name'].values))

def getNearestN(query):
    queryTFIDF_ = vectorizer.transform(query)
    distances, indices = nbrs.kneighbors(queryTFIDF_)
    return distances, indices

import time
t1 = time.time()
print('getting nearest n...')
distances, indices = getNearestN(unique_B)
t = time.time() - t1
print("COMPLETED IN:", t)

import pandas as pd
print('finding matches...')
matches = []
for i, j in enumerate(indices):
    # j is a 1-element array holding the nearest neighbour's row in cat_A_clean
    temp = [round(distances[i][0], 2), cat_A_clean[j[0]], unique_B[i]]
    matches.append(temp)

print('Building data frame...')
matches = pd.DataFrame(matches, columns=['Match confidence (lower is better)', 'ITEM_A', 'ITEM_B'])
print('Done')
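
As a follow-up (my own addition, not in the original), the matches frame can be filtered on distance to surface candidates for manual review; a small distance means a category-B name is nearly identical to a category-A name:

# the 0.3 cut-off is a hypothetical threshold; tune it against a labelled sample
suspects = matches[matches['Match confidence (lower is better)'] < 0.3]
print(suspects.sort_values('Match confidence (lower is better)').head(10))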

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def clean_string(text):
    return str(text).lower()

def cosine_sim_vectors(vec1, vec2):
    vec1 = vec1.reshape(1, -1)
    vec2 = vec2.reshape(1, -1)
    return cosine_similarity(vec1, vec2)[0][0]

def cos_similarity(sentences):
    cleaned = list(map(clean_string, sentences))
    # bag-of-words count vectors for the two cleaned strings
    vectors = CountVectorizer().fit_transform(cleaned).toarray()
    return cosine_sim_vectors(vectors[0], vectors[1])

cos_sim_list = []
for ind in matches.index:
    a = matches['Match confidence (lower is better)'][ind]
    b = matches['ITEM_A'][ind]
    c = matches['ITEM_B'][ind]
    # re-score each nearest-neighbour pair with word-level cosine similarity
    degreeOfSimilarity = cos_similarity([b, c])
    cos_sim_list.append([a, b, c, degreeOfSimilarity])
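
Finally, a hypothetical wrap-up step (my addition) that collects the scores and keeps the high-similarity pairs:

# column names and the 0.7 cut-off are assumptions; adjust to taste
result = pd.DataFrame(cos_sim_list, columns=['distance', 'ITEM_A', 'ITEM_B', 'cosine_sim'])
likely_same_item = result[result['cosine_sim'] > 0.7]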
