Different results when using train_test_split vs. splitting the data manually

Problem Description

I have a pandas DataFrame that I want to make predictions on, getting the root mean squared error for each feature. I was following an online guide that splits the dataset manually, but I thought it would be more convenient to use train_test_split from sklearn.model_selection. Unfortunately, I'm getting different results when splitting the data manually versus using train_test_split.

A (hopefully) reproducible example:

import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

np.random.seed(0)
# Toy data: four integer features plus a binary target column.
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)),
                  columns=['feature_1', 'feature_2', 'feature_3', 'feature_4'])
df['target'] = np.random.randint(2, size=100)
df2 = df.copy()

Here is a function, knn_train_test, that splits the data manually, fits the model, makes predictions, and so on:

def knn_train_test(train_col, target_col, df):
    knn = KNeighborsRegressor()
    np.random.seed(0)

    # Randomize order of rows in data frame.
    shuffled_index = np.random.permutation(df.index)
    rand_df = df.reindex(shuffled_index)

    # Divide number of rows in half and round.
    last_train_row = int(len(rand_df) / 2)

    # Select the first half and set as training set.
    # Select the second half and set as test set.
    train_df = rand_df.iloc[0:last_train_row]
    test_df = rand_df.iloc[last_train_row:]

    # Fit a KNN model using default k value.
    knn.fit(train_df[[train_col]], train_df[target_col])

    # Make predictions using model.
    predicted_labels = knn.predict(test_df[[train_col]])

    # Calculate and return RMSE.
    mse = mean_squared_error(test_df[target_col], predicted_labels)
    rmse = np.sqrt(mse)
    return rmse

rmse_results = {}
train_cols = df.columns.drop('target')

# For each column (minus `target`), train a model, return RMSE value
# and add to the dictionary `rmse_results`.
for col in train_cols:
    rmse_val = knn_train_test(col, 'target', df)
    rmse_results[col] = rmse_val

# Create a Series object from the dictionary so 
# we can easily view the results, sort, etc
rmse_results_series = pd.Series(rmse_results)
rmse_results_series.sort_values()

# Output
feature_3    0.541110
feature_2    0.548452
feature_4    0.559285
feature_1    0.569912
dtype: float64

Now, here is a function, knn_train_test2, that splits the data using train_test_split:

def knn_train_test2(train_col, target_col, df2):
    knn = KNeighborsRegressor()
    np.random.seed(0)

    # Split the data 50/50 with scikit-learn's train_test_split.
    X_train, X_test, y_train, y_test = train_test_split(
        df2[[train_col]], df2[[target_col]], test_size=0.5)

    # Fit a KNN model using default k value.
    knn.fit(X_train, y_train)

    # Make predictions using model.
    predictions = knn.predict(X_test)

    # Calculate and return RMSE.
    mse = mean_squared_error(y_test, predictions)
    rmse = np.sqrt(mse)
    return rmse

rmse_results = {}
train_cols = df2.columns.drop('target')

for col in train_cols:
    rmse_val = knn_train_test2(col, 'target', df2)
    rmse_results[col] = rmse_val


rmse_results_series = pd.Series(rmse_results)
rmse_results_series.sort_values()

# Output
feature_4    0.522303
feature_3    0.556417
feature_1    0.569210
feature_2    0.572713
dtype: float64

Why am I getting different results? I think I'm misunderstanding the split > train > test process in general, or perhaps misunderstanding/mis-specifying train_test_split. Thanks in advance.

Tags: python, numpy, machine-learning, scikit-learn, train-test-split

Solution


Your custom split implementation differs from scikit-learn's train_test_split implementation, which is why you get different results for the same seed.
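A quick way to convince yourself of this is to compare which rows each approach actually places in the training set. The snippet below is an illustrative sketch built on the toy DataFrame from the question (it is not part of either original function); it seeds the global NumPy RNG the same way in both cases and then checks how much the two training index sets overlap:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)),
                  columns=['feature_1', 'feature_2', 'feature_3', 'feature_4'])
df['target'] = np.random.randint(2, size=100)

# Manual approach: shuffle the index and keep the first half as training rows.
np.random.seed(0)
shuffled_index = np.random.permutation(df.index)
manual_train_rows = set(shuffled_index[:len(df) // 2])

# scikit-learn approach: same global seed, same 50/50 proportion.
np.random.seed(0)
X_train, X_test, y_train, y_test = train_test_split(
    df[['feature_1']], df['target'], test_size=0.5)
sklearn_train_rows = set(X_train.index)

# With the same seed, the two approaches typically end up training on
# different rows, which is why the fitted models (and RMSE values) disagree.
print(len(manual_train_rows & sklearn_train_rows), "rows in common out of 50")
print(manual_train_rows == sklearn_train_rows)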

Here you can find the official implementation. The first thing worth noticing is that, by default, scikit-learn performs 10 iterations of re-shuffling and splitting (check the n_splits parameter).
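To give a rough feel for that logic, here is a simplified sketch of a ShuffleSplit-style splitter (an approximation for illustration, not the actual library source, which has far more validation and options). Note that it carves the test block off the front of each permutation, whereas the manual knn_train_test above treats the front of its single permutation as the training half:

import numpy as np

def shuffle_split_sketch(n_samples, test_size=0.5, seed=0, n_splits=10):
    # Simplified stand-in for a ShuffleSplit-style splitter.
    rng = np.random.RandomState(seed)
    n_test = int(np.ceil(n_samples * test_size))
    for _ in range(n_splits):
        permutation = rng.permutation(n_samples)
        test_idx = permutation[:n_test]    # test rows come from the front
        train_idx = permutation[n_test:]   # remaining rows form the training set
        yield train_idx, test_idx

# train_test_split only consumes the first of these splits.
train_idx, test_idx = next(shuffle_split_sketch(100))
print(sorted(train_idx)[:5], sorted(test_idx)[:5])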

Only if your method is exactly the same as scikit-learn's method can you expect to get the same results for the same seed.
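If the goal is simply a reproducible, apples-to-apples comparison across features, rather than reproducing the manual split row for row, one option is to compute a single 50/50 split of row positions up front and reuse it for every feature. The helper below is a sketch of that idea (knn_rmse and its parameters are illustrative names, not from the original post); passing an explicit random_state also removes the dependence on the global NumPy seed:

import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def knn_rmse(train_col, target_col, df, train_pos, test_pos):
    # Fit on the shared training rows and score on the shared test rows.
    knn = KNeighborsRegressor()
    knn.fit(df.iloc[train_pos][[train_col]], df.iloc[train_pos][target_col])
    predictions = knn.predict(df.iloc[test_pos][[train_col]])
    return np.sqrt(mean_squared_error(df.iloc[test_pos][target_col], predictions))

# df is the toy DataFrame from the reproducible example above.
# One 50/50 split of row positions, reused for every feature.
train_pos, test_pos = train_test_split(
    np.arange(len(df)), test_size=0.5, random_state=0)

rmse_results = {col: knn_rmse(col, 'target', df, train_pos, test_pos)
                for col in df.columns.drop('target')}
print(pd.Series(rmse_results).sort_values())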

