How to properly use a Scaler when using GridSearchCV with TimeSeriesSplit

Problem description

I am currently using GridSearchCV with TimeSeriesSplit like this, so that my data is split into 5 CV splits:

import pandas as pd
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Features are the first 8 columns, the target is the 9th
X = data.iloc[:, 0:8]
y = data.iloc[:, 8:9]

SVR_parameters = [{'kernel': ['rbf'],
                   'gamma': [.01, .001, 1],
                   'C': [1, 100]}]

gsc = GridSearchCV(SVR(), param_grid=SVR_parameters, scoring='neg_mean_squared_error',
                   cv=TimeSeriesSplit(n_splits=5).split(X), verbose=10, n_jobs=-1, refit=True)
gsc.fit(X, y)
gsc_dataframe = pd.DataFrame(gsc.cv_results_)

My understanding is that when using a scaler, you want to fit it on the training set only and use that fitted scaler object to transform the test set, in order to prevent data leakage. So basically something like this:

scaler_X = StandardScaler()
scaler_y = StandardScaler()
scaler_X.fit(X_train)
scaler_y.fit(y_train)
X_train, X_test = scaler_X.transform(X_train), scaler_X.transform(X_test)
y_train, y_test = scaler_y.transform(y_train), scaler_y.transform(y_test)

My question is: if I do this kind of scaling, how do I have GridSearchCV split my whole dataset? If I simply replace the X variable in the gsc object with X_train, it would leave out X_test, right?

I am wondering whether there is a proper way to scale the data while still letting GridSearchCV use all of the data.

I hope I have explained this clearly enough. Please let me know if you need any clarification.
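For reference, this is roughly what the desired behaviour looks like when written out by hand: the scaler is refit on each TimeSeriesSplit training fold and only applied to the corresponding validation fold. This is a minimal sketch added for illustration (the loop, the fixed SVR hyper-parameters, and the names X_tr, X_val, fold_scores are not from the original post):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error

fold_scores = []
for train_idx, val_idx in TimeSeriesSplit(n_splits=5).split(X):
    X_tr, X_val = X.iloc[train_idx], X.iloc[val_idx]
    y_tr, y_val = y.iloc[train_idx], y.iloc[val_idx]

    # Fit the scalers on the training fold only, to avoid leakage
    scaler_X = StandardScaler().fit(X_tr)
    scaler_y = StandardScaler().fit(y_tr)

    # Fixed hyper-parameters here, just to keep the sketch short
    model = SVR(kernel='rbf', gamma=.01, C=100)
    model.fit(scaler_X.transform(X_tr), scaler_y.transform(y_tr).ravel())

    # Transform the validation fold with the fold's scaler, then undo the y scaling
    pred = model.predict(scaler_X.transform(X_val))
    pred = scaler_y.inverse_transform(pred.reshape(-1, 1)).ravel()
    fold_scores.append(mean_squared_error(y_val, pred))

print(np.mean(fold_scores))

Doing this by hand for every hyper-parameter combination quickly gets tedious, which is exactly what the Pipeline approach in the solution below automates.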


Update:

Adding the full code to help explain better:

import pandas as pd
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit, train_test_split

X = data.iloc[:, 0:8]
y = data.iloc[:, 8:9]

# Hold out the last 25% as a test set (no shuffling, to preserve the time order)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, shuffle=False)

test_index = X_test.index.values.tolist()

scaler_x = StandardScaler()
scaler_y = StandardScaler()
scaler_x.fit(X_train)
scaler_y.fit(y_train)

X_train, X_test = scaler_x.transform(X_train), scaler_x.transform(X_test)
y_train, y_test = scaler_y.transform(y_train), scaler_y.transform(y_test)

SVR_parameters = [{'kernel': ['rbf'],
                   'gamma': [.1, .01, .001],
                   'C': [100, 500, 1000]}]

gsc = GridSearchCV(SVR(), param_grid=SVR_parameters, scoring='neg_mean_squared_error',
                   cv=TimeSeriesSplit(n_splits=5).split(X_train), verbose=10, n_jobs=-1, refit=True)

gsc.fit(X_train, y_train.ravel())  # ravel to pass a 1-D target to SVR
gsc_dataframe = pd.DataFrame(gsc.cv_results_)
y_pred = gsc.predict(X_test)
# inverse_transform expects a 2-D array, so reshape the 1-D predictions first
y_pred = scaler_y.inverse_transform(y_pred.reshape(-1, 1))
y_test = scaler_y.inverse_transform(y_test)
mae = round(metrics.mean_absolute_error(y_test,y_pred),2)
mse = round(metrics.mean_squared_error(y_test, y_pred),2)
y_df = pd.DataFrame(index=pd.to_datetime(test_index))
y_pred = y_pred.reshape(len(y_pred), )
y_test = y_test.reshape(len(y_test), )
y_df['Model'] = y_pred
y_df['Actual'] = y_test
y_df.plot(title='{}'.format(gsc.cv_results_['params'][gsc.best_index_]))

Tags: python, machine-learning, scikit-learn, cross-validation, grid-search

Solution


Use a Pipeline (https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# The scaler is part of the pipeline, so it is refit on each training fold only
pipe = Pipeline([
        ('scale', StandardScaler()),
        ('clf', SVR())])

# Parameters of pipeline steps are addressed as <step name>__<parameter name>
param_grid = dict(clf__gamma=[.01, .001, 1],
                  clf__C=[1, 100],
                  clf__kernel=['rbf', 'linear'])

gsc = GridSearchCV(pipe, param_grid=param_grid, scoring='neg_mean_squared_error',
                   cv=TimeSeriesSplit(n_splits=5).split(X), verbose=10, n_jobs=-1, refit=True)

gsc.fit(X, y)
print(gsc.best_estimator_)
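
Note that the pipeline above scales only the features; the target y is passed to SVR unscaled. If you also want to scale y, as the code in the question does, one option (a sketch added here for illustration, not part of the original answer) is scikit-learn's TransformedTargetRegressor, which fits its target transformer on each training fold and inverse-transforms the predictions automatically:

from sklearn.compose import TransformedTargetRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Scale X inside the pipeline and y via the outer TransformedTargetRegressor
model = TransformedTargetRegressor(
    regressor=Pipeline([('scale', StandardScaler()),
                        ('clf', SVR())]),
    transformer=StandardScaler())

# Inner pipeline parameters are reached through the regressor__ prefix
param_grid = {'regressor__clf__gamma': [.01, .001, 1],
              'regressor__clf__C': [1, 100]}

gsc_ttr = GridSearchCV(model, param_grid=param_grid, scoring='neg_mean_squared_error',
                       cv=TimeSeriesSplit(n_splits=5).split(X), n_jobs=-1, refit=True)
gsc_ttr.fit(X, y)
y_pred = gsc_ttr.predict(X)  # predictions come back on the original scale of y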

Also see this post for the behind-the-scenes steps: Applying StandardScaler in a Pipeline in scikit-learn (sklearn)
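
In short, what that post describes (a rough sketch, not quoted from it): on every CV fold the cloned pipeline fits the scaler on the training slice only and reuses those statistics on the validation slice, and after the final refit you can inspect the fitted steps through named_steps:

# Conceptually, on each TimeSeriesSplit fold GridSearchCV does roughly:
#   pipe.fit(X_fold_train, y_fold_train)
#       -> StandardScaler fit_transform on the training slice, then SVR fit
#   pipe.predict(X_fold_val)
#       -> SVR predict on StandardScaler.transform of the validation slice

# After refit=True the best pipeline is refit on all of X, y and can be inspected:
best = gsc.best_estimator_                # the refit Pipeline from the answer above
print(best.named_steps['scale'].mean_)    # per-feature means learned by the scaler
print(best.named_steps['scale'].scale_)   # per-feature standard deviations
print(best.named_steps['clf'])            # the SVR with the selected hyper-parameters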

