scikit-learn logistic regression model with TfidfVectorizer

Problem Description

I am trying to build a logistic regression model with scikit-learn using the code below. I use 9 columns as features (X) and 1 column as the label (Y). When I try to fit, I get the error "ValueError: Found input variables with inconsistent numbers of samples: [9, 560000]", even though X and Y had the same length beforehand. If I use x.transpose() I get a different error: "AttributeError: 'int' object has no attribute 'lower'". I assume this has something to do with the TfidfVectorizer, which I am using because 3 of the columns contain single words and do not work otherwise. Is this the right way to do it, or should I transform the words in those columns separately and then use train_test_split? If not, why am I getting these errors and how can I track them down?

df = pd.read_csv("UNSW-NB15_1.csv",header=None, names=cols, encoding = "UTF-8",low_memory=False) 

df.to_csv('netraf.csv')
csv = 'netraf.csv'
my_df = pd.read_csv(csv)

x_features = my_df.columns[1:10]
x_data = my_df[x_features]
Y = my_df["Label"]

x_train, x_validation, y_train, y_validation = model_selection.train_test_split(
    x_data, Y, test_size=0.2, random_state=7)

tfidf_vectorizer = TfidfVectorizer()
lr = LogisticRegression()
tfidf_lr_pipe = Pipeline([('tfidf', tfidf_vectorizer), ('lr', lr)])

tfidf_lr_pipe.fit(x_train, y_train)  

Tags: python, machine-learning, scikit-learn, logistic-regression, tfidfvectorizer

Solution


What you are trying to do is unusual, because TfidfVectorizer is designed to extract numerical features from text. As for the errors: the ValueError most likely comes from fitting the vectorizer on the DataFrame itself, since iterating over a DataFrame yields its 9 column names as "documents", which no longer matches your 560000 labels; after x.transpose(), the columns become the integer row index, and the default preprocessor fails when it calls .lower() on an int. However, if you don't really care about that and just want your code to work, one approach is to convert your numeric data to strings and configure TfidfVectorizer to accept pre-tokenized data:

import pandas as pd
from sklearn import model_selection
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

cols = ['srcip','sport','dstip','dsport','proto','service','smeansz','dmeansz','attack_cat','Label']
df = pd.read_csv("UNSW-NB15_1.csv",header=None, names=cols, encoding = "UTF-8",low_memory=False) 

df.to_csv('netraf.csv')
csv = 'netraf.csv'
my_df = pd.read_csv(csv)

# fill missing values with empty strings first (after casting to str they would
# become the literal string 'nan'), then convert every column to string,
# since TfidfVectorizer expects string tokens rather than numbers
for col in my_df.columns:
    my_df[col] = my_df[col].fillna('').astype(str)

x_features = my_df.columns[1:10]
x_data = my_df[x_features]
Y = my_df["Label"]

x_train, x_validation, y_train, y_validation = model_selection.train_test_split(
    x_data.values, Y.values, test_size=0.2, random_state=7)

# configure TfidfVectorizer to accept tokenized data
# reference http://www.davidsbatista.net/blog/2018/02/28/TfidfVectorizer/
tfidf_vectorizer = TfidfVectorizer(
    analyzer='word',
    tokenizer=lambda x: x,
    preprocessor=lambda x: x,
    token_pattern=None)

lr = LogisticRegression()
tfidf_lr_pipe = Pipeline([('tfidf', tfidf_vectorizer), ('lr', lr)])
tfidf_lr_pipe.fit(x_train, y_train)
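
Once the pipeline is fitted, you could sanity-check it on the held-out validation split; this is just a minimal sketch on top of the variables defined above, not part of the original pipeline:

from sklearn.metrics import accuracy_score, classification_report

# predict on the validation split that was held out earlier
y_pred = tfidf_lr_pipe.predict(x_validation)
print(accuracy_score(y_validation, y_pred))
print(classification_report(y_validation, y_pred))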

That being said, I would suggest taking a different approach to feature engineering on this dataset. For example, you could try encoding the nominal data (e.g. IPs, ports) as numeric values, as sketched below.
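
A minimal sketch of that direction, assuming the same netraf.csv and column names as above, treating the IPs, ports, protocol and service as categorical features and assuming the mean packet size columns are numeric and free of missing values:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

my_df = pd.read_csv('netraf.csv')

# nominal columns get one-hot encoded, numeric columns get scaled
categorical_cols = ['srcip', 'dstip', 'sport', 'dsport', 'proto', 'service']
numeric_cols = ['smeansz', 'dmeansz']

# cast categoricals to str so OneHotEncoder sees one consistent dtype per column
my_df[categorical_cols] = my_df[categorical_cols].astype(str)

preprocess = ColumnTransformer([
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_cols),
    ('num', StandardScaler(), numeric_cols),
])

pipe = Pipeline([('prep', preprocess), ('lr', LogisticRegression(max_iter=1000))])

x_train, x_validation, y_train, y_validation = train_test_split(
    my_df[categorical_cols + numeric_cols], my_df['Label'],
    test_size=0.2, random_state=7)

pipe.fit(x_train, y_train)
print(pipe.score(x_validation, y_validation))

With this layout the encoding is learned only from the training split inside the pipeline, which avoids leaking validation data into the preprocessing step.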

