Nested cross-validation with stratified folds

Problem Description

I am trying to implement a random forest regressor using a scikit-learn pipeline and nested cross-validation. The dataset is about house prices, with several features (some numerical, others categorical) and a continuous target variable (median_house_value).

Data columns (total 10 columns):
 #   Column              Non-Null Count  Dtype  
---  ------              --------------  -----  
 0   longitude           20640 non-null  float64
 1   latitude            20640 non-null  float64
 2   housing_median_age  20640 non-null  float64
 3   total_rooms         20640 non-null  float64
 4   total_bedrooms      20433 non-null  float64
 5   population          20640 non-null  float64
 6   households          20640 non-null  float64
 7   median_income       20640 non-null  float64
 8   median_house_value  20640 non-null  float64
 9   ocean_proximity     20640 non-null  object 

I decided to manually create two stratified 5-fold splits (for the inner and outer loops of the nested CV). The stratification is based on a binned version of the median_income feature:

import numpy as np
import pandas as pd

# bin median_income into 5 income categories to stratify on
df.insert(9, "income_cat",
          pd.cut(df["median_income"], bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
                 labels=[1, 2, 3, 4, 5]))
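As a quick sanity check (illustrative, not part of the original post), the bin proportions in the full dataset can be inspected; stratified splits should approximately reproduce these proportions in every fold:

# Illustrative: relative frequency of each income category in the full
# dataset; stratified train/test folds should mirror these proportions.
print(df["income_cat"].value_counts(normalize=True).sort_index())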

Here is the code that creates the folds:

from sklearn.model_selection import StratifiedShuffleSplit

cv1_5 = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
cv1_splits = []

# create the first 5 stratified fold index sets (inner loop)
for train_index, test_index in cv1_5.split(df, df["income_cat"]):
    cv1_splits.append((train_index, test_index))

cv2_5 = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=43)
cv2_splits = []

# create the second 5 stratified fold index sets (outer loop)
for train_index, test_index in cv2_5.split(df, df["income_cat"]):
    cv2_splits.append((train_index, test_index))

# set the initial dataset
X = df.drop("median_house_value", axis=1)
y = df["median_house_value"].copy()
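For reference, a quick illustrative look at what these splits contain: each stored tuple holds positional indices into the full 20640-row DataFrame, which is worth keeping in mind for what follows.

# Illustrative: each stored tuple indexes into the *full* DataFrame.
# With test_size=0.2 and 20640 rows, every split has 16512 train and
# 4128 test indices drawn from range(len(df)).
train_index, test_index = cv1_splits[0]
print(len(train_index), len(test_index))  # 16512 4128
print(train_index.max() < len(df))        # True: indices refer to the full df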

Here is the preprocessing pipeline:

from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# create the preprocessing pipeline
preprocess_pipe = Pipeline(
    [
        ("ctransformer", ColumnTransformer([
            (
                "num_pipe",
                Pipeline([
                    ("imputer", SimpleImputer(strategy="median")),
                    ("scaler", StandardScaler())
                ]),
                list(X.select_dtypes(include=[np.number]))
            ),
            (
                "cat_pipe",
                Pipeline([
                    ("encoder", OneHotEncoder()),
                ]),
                ["ocean_proximity"]
            )
        ])),
    ]
)
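As a quick illustrative check (not in the original post), fitting and applying this pipeline on X shows the transformed output: the 8 numeric columns are imputed and scaled (total_bedrooms has 207 missing values per the summary above), and ocean_proximity is one-hot encoded into one column per category.

# Illustrative: apply the preprocessing and inspect the resulting shape;
# the 207 missing total_bedrooms values are imputed along the way.
X_prepared = preprocess_pipe.fit_transform(X)
print(X_prepared.shape)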

Here is the final pipeline (which includes the preprocessing pipeline):

from sklearn.ensemble import RandomForestRegressor

pipe = Pipeline([
    ("preprocess", preprocess_pipe),
    ("model", RandomForestRegressor())
])

I am using nested cross-validation to tune the hyperparameters of the final pipeline and to estimate its generalization error.

Here is the parameter grid:

param_grid = [
    {
        "preprocess__ctransformer__num_pipe__imputer__strategy": ["mean", "median"],
        "model__n_estimators": [3, 10, 30, 50, 100, 150, 300],
        "model__max_features": [2, 4, 6, 8]
    }
]
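For what it's worth, the double-underscore keys follow scikit-learn's nested parameter naming scheme (<step>__<sub_step>__...__<param>); one illustrative way to verify the spelling of the grid keys is to list the pipeline's parameters:

# Illustrative: print parameter names; the grid keys must match these exactly.
for name in sorted(pipe.get_params()):
    if name.endswith("__strategy") or name.startswith("model__"):
        print(name)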

Here is the final step:

from sklearn.model_selection import GridSearchCV, cross_val_score

grid_search = GridSearchCV(pipe, param_grid, cv=cv1_splits,
                           scoring="neg_mean_squared_error",
                           return_train_score=True)

clf = grid_search.fit(X, y)

generalization_error = cross_val_score(clf.best_estimator_, X=X, y=y, cv=cv2_splits)
generalization_error

Now, here is where things break down (the bottom two lines of the previous snippet):

If I follow the scikit-learn documentation (link), I should write:

generalization_error = cross_val_score(clf, X=X, y=y, cv=cv2_splits, scoring="neg_mean_squared_error")
generalization_error

Unfortunately, calling cross_val_score(clf, X=X, ...) gives me an error (the indices are out of bounds for the train/test splits), and the generalization error array contains only NaN.

On the other hand, if I write this:

generalization_error = cross_val_score(clf.best_estimator_, X=X, y=y, cv=cv2_splits, scoring="neg_mean_squared_error")
generalization_error

the script runs perfectly, and I can see the generalization error array filled with scores. Can I stick with this last approach, or is there something wrong with the whole procedure?

Tags: python, scikit-learn

Solution


To me, the problem here likely lies in the use of cv1_splits and cv2_splits instead of cv1_5 and cv2_5 (and specifically, it is the use of cv1_splits that causes the problem).

In general, cross_val_score() calls fit() on a clone of the estimator clf; in this case that is a GridSearchCV estimator, which gets fitted on several X_inner_train sets (subsets of X taken according to the stratified cv2_splits, and therefore of smaller dimension than X; see here for the notation). Since cv1_splits was built from X, it contains indices consistent with X's dimensions, but potentially inconsistent with the dimensions of X_inner_train.

By passing cv1_5 to the GridSearchCV estimator instead, the estimator itself takes care of splitting the inner training sets coherently (see here for reference).
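A minimal sketch of that suggestion (illustrative, with variable names reused from the question). One caveat: a StratifiedShuffleSplit object passed directly to GridSearchCV or cross_val_score would try to stratify on the continuous target y and raise an error, so plain ShuffleSplit objects with the same sizes and seeds stand in here; keeping the income_cat stratification would need a different mechanism (e.g. a custom splitter).

from sklearn.model_selection import GridSearchCV, ShuffleSplit, cross_val_score

# Pass CV *splitter objects* rather than precomputed index arrays: each
# clone of GridSearchCV re-splits whatever training subset it receives,
# so the indices always match the data it sees. ShuffleSplit stands in
# for StratifiedShuffleSplit because a stratified splitter used here
# would attempt to stratify on the continuous target y and fail.
inner_cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
outer_cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=43)

grid_search = GridSearchCV(pipe, param_grid, cv=inner_cv,
                           scoring="neg_mean_squared_error",
                           return_train_score=True)

generalization_error = cross_val_score(grid_search, X=X, y=y, cv=outer_cv,
                                       scoring="neg_mean_squared_error")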

