scikit-learn: FeatureUnion with hand-crafted features

Problem description

I am performing multi-label classification on text data. I wish to use tf-idf features combined with custom linguistic features, similar to the FeatureUnion example here.

I have already generated the custom linguistic features; they take the form of a dictionary, where the keys represent the labels and the (list) values represent the features.

custom_features_dict = {'contact':['contact details', 'e-mail'], 
                       'demographic':['gender', 'age', 'birth'],
                       'location':['location', 'geo']}

The training data has the following structure:

text                                            contact  demographic  location
---                                              ---      ---          ---
'provide us with your date of birth and e-mail'  1        1            0
'contact details and location will be stored'    1        0            1
'date of birth should be before 2004'            0        1            0

How can the content of the above dict be incorporated into FeatureUnion? My understanding is that a user-defined function should be called, which returns boolean values corresponding to whether the string values (from custom_features_dict) are present in the training data.

For the given training data, this yields the following list of dicts:

[
    {
       'contact':1,
       'demographic':1,
       'location':0
    },
    {
       'contact':1,
       'demographic':0,
       'location':1
    },
    {
       'contact':0,
       'demographic':1,
       'location':0
    },
] 

How can the above list be used to implement fit and transform?
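For reference, my understanding of the dict_vect step in the code below is that a DictVectorizer can already turn such a list of dicts into a numeric feature matrix; a minimal sketch of that part (illustrative only, with the three dicts written out inline):

from sklearn.feature_extraction import DictVectorizer

# Sketch only: DictVectorizer maps each dict to one row, one column per key
# (columns are sorted alphabetically: contact, demographic, location).
my_features = [
    {'contact': 1, 'demographic': 1, 'location': 0},
    {'contact': 1, 'demographic': 0, 'location': 1},
    {'contact': 0, 'demographic': 1, 'location': 0},
]
vec = DictVectorizer(sparse=False)
print(vec.fit_transform(my_features))
# [[1. 1. 0.]
#  [1. 0. 1.]
#  [0. 1. 0.]]

What I am missing is the step that produces this list from the raw text inside the pipeline.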

The code is as follows:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import DictVectorizer
#from sklearn.metrics import accuracy_score
from sklearn.multiclass import OneVsRestClassifier
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline, FeatureUnion
from io import StringIO

data = StringIO(u'''text,contact,demographic,location
provide us with your date of birth and e-mail,1,1,0
contact details and location will be stored,0,1,1
date of birth should be before 2004,0,1,0''')

df = pd.read_csv(data)

custom_features_dict = {'contact':['contact details', 'e-mail'], 
                        'demographic':['gender', 'age', 'birth'],
                        'location':['location', 'geo']}

my_features = [
    {
       'contact':1,
       'demographic':1,
       'location':0
    },
    {
       'contact':1,
       'demographic':0,
       'location':1
    },
    {
       'contact':0,
       'demographic':1,
       'location':0
    },
]

bow_pipeline = Pipeline(
    steps=[
        ("tfidf", TfidfVectorizer(stop_words=stop_words)),
    ]
)

manual_pipeline = Pipeline(
    steps=[
        # This needs to be fixed
        ("custom_features", my_features),
        ("dict_vect", DictVectorizer()),
    ]
)

combined_features = FeatureUnion(
    transformer_list=[
        ("bow", bow_pipeline),
        ("manual", manual_pipeline),
    ]
)

final_pipeline = Pipeline([
            ('combined_features', combined_features),
            ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
        ]
)

labels = ['contact', 'demographic', 'location']

for label in labels:
    final_pipeline.fit(df['text'], df[label]) 

Tags: python, scikit-learn, nlp, text-classification, multilabel-classification

Solution


You have to define a transformer that takes your text as input. Something like this:

from sklearn.base import BaseEstimator, TransformerMixin

custom_features_dict = {'contact': ['contact details', 'e-mail'],
                        'demographic': ['gender', 'age', 'birth'],
                        'location': ['location', 'geo']}

# Helper function which returns 1 if one of the words occurs in the text, else 0.
# You can add more words or categories to custom_features_dict if you want.
def is_words_present(text, listofwords):
    for word in listofwords:
        if word in text:
            return 1
    return 0

class CustomFeatureTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, custom_feature_dict):
        self.custom_feature_dict = custom_feature_dict

    def fit(self, x, y=None):
        return self

    def transform(self, data):
        result_arr = []
        for text in data:
            arr = []
            for key in self.custom_feature_dict:
                arr.append(is_words_present(text, self.custom_feature_dict[key]))
            result_arr.append(arr)
        return result_arr

Note: this transformer directly produces an array, which looks like this: [1, 0, 1]. It does not produce a dictionary, which lets us skip the DictVectorizer.
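
As a quick sanity check (illustrative only, using the three training sentences from the question's df), calling transform directly shows the rows the FeatureUnion will receive:

# Quick check: one row per text; columns follow the insertion order of
# custom_features_dict, i.e. [contact, demographic, location].
transformer = CustomFeatureTransformer(custom_features_dict)
print(transformer.transform(df['text']))
# [[1, 1, 0], [1, 0, 1], [0, 1, 0]]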

Furthermore, I changed the way the multi-label classification is handled, see here:

# First, I generate a new column in the dataframe, with all the labels per row:
def create_textlabels_array(row):
  arr = []
  for label in ['contact', 'demographic', 'location']:
    if row[label]==1:
      arr.append(label)
  return arr

df['textlabels'] = df.apply(create_textlabels_array, 1) 

# Then we generate the binarized labels:
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer().fit(df['textlabels'])
y = mlb.transform(df['textlabels'])
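
To make the encoding concrete: the columns of y follow mlb.classes_ (sorted alphabetically), so for the three example rows it looks like this:

# Inspect the binarized label matrix for the three example rows.
print(mlb.classes_)  # ['contact' 'demographic' 'location']
print(y)
# [[1 1 0]
#  [0 1 1]
#  [0 1 0]]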

Now we can add everything to the pipeline:

bow_pipeline = Pipeline(
    steps=[
        ("tfidf", TfidfVectorizer(stop_words=stop_words)),
    ]
)

manual_pipeline = Pipeline(
    steps=[
        ("costum_vect", CustomFeatureTransformer(custom_features_dict)),
    ]
)

combined_features = FeatureUnion(
    transformer_list=[
        ("bow", bow_pipeline),
        ("manual", manual_pipeline),
    ]
)

final_pipeline = Pipeline([
        ('combined_features', combined_features),
        ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)),
    ]
)

#train your pipeline
final_pipeline.fit(df['text'], y) 

# Let's predict something (note: of course the amount of training data is a bit low in this example case):
pred = final_pipeline.predict(["write an e-mail to our location please"])
print(pred) #output: [0, 1, 1] 

# Reverse the predicted array back to the actual labels:
print(mlb.inverse_transform(pred)) #output: [('demographic', 'location')]
