Bayesian Optimization for a LightGBM Model

Problem Description

I can successfully improve the performance of my XGBoost model through Bayesian optimization, but the best I can achieve through Bayesian optimization when using LightGBM (my preferred choice) is worse than what I can achieve by using its default hyperparameters and following the standard early-stopping approach.

When tuning via Bayesian optimization, I made sure to include the algorithm's default hyperparameters in the search surface, for reference purposes.

The code below shows the RMSE from a LightGBM model with the default hyperparameters, using seaborn's diamonds dataframe as my working example:

#pip install bayesian-optimization

import seaborn as sns
from sklearn.model_selection import train_test_split
import lightgbm as lgb
from bayes_opt import BayesianOptimization

df = sns.load_dataset('diamonds')

df["color"] = df["color"].astype('category')
df["color_cat"] = df["color"].cat.codes
df = df.drop(["color"],axis = 1)

df["cut"] = df["cut"].astype('category')
df["cut_cat"] = df["cut"].cat.codes
df = df.drop(["cut"],axis = 1)

df["clarity"] = df["clarity"].astype('category')
df["clarity_cat"] = df["clarity"].cat.codes
df = df.drop(["clarity"],axis = 1)

y = df['price']
X = df.drop(['price'], axis=1)

seed = 7
test_size = 0.3
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size,random_state=seed)

train_lgb = lgb.Dataset(X_train, y_train)
eval_lgb = lgb.Dataset(X_test, y_test, reference = train_lgb)

params = {'objective': 'regression',
          'metric': 'rmse',
          'learning_rate': 0.02}
lgb_reg = lgb.train(params, train_lgb, num_boost_round=10000,
                    early_stopping_rounds=50, verbose_eval=100,
                    valid_sets=eval_lgb)
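Note: the call above uses the pre-4.0 LightGBM API. In LightGBM 4.x, early_stopping_rounds and verbose_eval were removed from lgb.train in favor of callbacks; a roughly equivalent sketch for the newer API:

# LightGBM >= 4.0 moved early stopping and logging into callbacks;
# this sketch mirrors the call above under that newer API:
lgb_reg = lgb.train(params, train_lgb, num_boost_round=10000,
                    valid_sets=[eval_lgb],
                    callbacks=[lgb.early_stopping(50),
                               lgb.log_evaluation(100)])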

Result

OUT:
Training until validation scores don't improve for 50 rounds.
Early stopping, best iteration is:
[1330 (n_estimators)] valid_0's rmse: 538.728

Here is my attempt at implementing Bayesian optimization, and the resulting RMSE values:

def modelFitter(colsampleByTree, subsample, maxDepth, num_leaves):
    # cast the integer-valued hyperparameters (bayes_opt proposes floats)
    model = lgb.LGBMRegressor(learning_rate=0.02,
                              n_estimators=10000,
                              max_depth=int(maxDepth),
                              subsample=subsample,
                              colsample_bytree=colsampleByTree,
                              num_leaves=int(num_leaves))

    evalSet = [(X_test, y_test)]
    model.fit(X_train, y_train, eval_metric="rmse", eval_set=evalSet,
              early_stopping_rounds=50, verbose=False)

    # best validation RMSE reached for this configuration
    bestScore = model.best_score_[list(model.best_score_.keys())[0]]['rmse']

    # negate: BayesianOptimization maximizes its target
    return -bestScore

# Bounded region of parameter space
pbounds = {'colsampleByTree': (0.8,1.0), 'subsample': (0.8,1.0), 'maxDepth': (2,5), 'num_leaves': (24, 45)}

optimizer = BayesianOptimization(
    f=modelFitter,
    pbounds=pbounds,
    random_state=1)

optimizer.maximize(init_points=5, n_iter=5)  # init_points = random steps, n_iter = Bayesian steps

Result

|  iter     |  target   | colsam... | maxDepth  | num_le... | subsample |
-------------------------------------------------------------------------
|  1        | -548.7    |  0.8834   |  4.161    |  24.0     |  0.8605   |
|  2        | -642.4    |  0.8294   |  2.277    |  27.91    |  0.8691   |
|  3        | -583.5    |  0.8794   |  3.616    |  32.8     |  0.937    |
|  4        | -548.7    |  0.8409   |  4.634    |  24.58    |  0.9341   |
|  5        | -583.5    |  0.8835   |  3.676    |  26.95    |  0.8396   |
|  6        | -548.7    |  0.8625   |  4.395    |  24.29    |  0.8968   |
|  7        | -548.7    |  0.8435   |  4.603    |  24.42    |  0.9298   |
|  8        | -551.5    |  0.9271   |  4.266    |  24.11    |  0.8035   |
|  9        | -548.7    |  0.8      |  4.11     |  24.08    |  1.0      |
|  10       | -548.7    |  0.8      |  4.44     |  24.45    |  0.9924   |

The RMSE produced during Bayesian optimization (-1 x "target") should be better than the one produced by LightGBM's defaults, but I cannot obtain a better RMSE (i.e. a target better/higher than the -538.728 achieved via the "normal" early-stopping process above).
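For what it's worth, one way to make the default-hyperparameter reference point explicit inside the optimizer is bayes_opt's probe(). A minimal sketch reusing the optimizer above; the probed values are LightGBM-style defaults clipped to the pbounds, so they are illustrative:

# Queue an explicit reference point (default-like values, clipped to the
# pbounds above) and flush it with a zero-step maximize() call:
optimizer.probe(params={'colsampleByTree': 1.0, 'subsample': 1.0,
                        'maxDepth': 5, 'num_leaves': 31},
                lazy=True)
optimizer.maximize(init_points=0, n_iter=0)

# Best (negated) RMSE found so far and the hyperparameters that produced it:
print(optimizer.max['target'])
print(optimizer.max['params'])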

maxDepth and num_leaves should be integers; there appears to be an open ticket to enforce this (i.e. introducing "ptypes"): https://github.com/fmfn/BayesianOptimization/pull/131/files. The usual workaround in the meantime is sketched below.
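A minimal sketch of that workaround: keep the search space continuous and discretize inside the objective (modelFitterRounded is a hypothetical wrapper around the modelFitter above):

# bayes_opt proposes floats only, so snap the integer-valued dimensions
# to the nearest whole number before they reach LightGBM:
def modelFitterRounded(colsampleByTree, subsample, maxDepth, num_leaves):
    return modelFitter(colsampleByTree, subsample,
                       round(maxDepth), round(num_leaves))

optimizer = BayesianOptimization(f=modelFitterRounded, pbounds=pbounds,
                                 random_state=1)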

Why does Bayesian optimization seemingly fail to find a better solution with LightGBM, when it does find one with XGBoost?

Tags: python, pandas, bayesian, hyperparameters, lightgbm

Solution


For regression, I managed to achieve improved results using the cv function from the lightgbm package.

My "black box" function for BayesianOptimization() returns the (negated) minimum of l1-mean, since the optimizer maximizes its target:

def black_box_lgbm():
    params = {...}  # your params here
    cv_results = lgb.cv(params, train_data, nfold=5, metrics='mae',
                        verbose_eval=200, stratified=False)
    # negate: BayesianOptimization maximizes, and we want the smallest MAE
    return -min(cv_results['l1-mean'])
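For the optimizer to have anything to tune, the searched hyperparameters would be arguments of the black box and fed into params. A sketch of that wiring; the parameter names, bounds, and the train_lgb data set are illustrative assumptions, not the answerer's actual setup:

# Illustrative wiring; num_leaves / max_depth and their bounds are assumptions:
def black_box_tunable(num_leaves, max_depth):
    params = {'objective': 'regression', 'learning_rate': 0.02,
              'num_leaves': int(round(num_leaves)),
              'max_depth': int(round(max_depth))}
    cv_results = lgb.cv(params, train_lgb, nfold=5, metrics='mae',
                        stratified=False)
    # key is 'l1-mean' on the older LightGBM API used in this post
    # ('valid l1-mean' on 4.x); negate because the optimizer maximizes
    return -min(cv_results['l1-mean'])

optimizer = BayesianOptimization(
    f=black_box_tunable,
    pbounds={'num_leaves': (24, 45), 'max_depth': (2, 8)},
    random_state=1)
optimizer.maximize(init_points=5, n_iter=10)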

After calling maximize() on BayesianOptimization() and obtaining the configuration with the lowest l1-error, I retrained a model and compared it to one with the default hyperparameters. This consistently resulted in a lower MSE than the defaults.
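A sketch of that final retraining step, assuming the optimizer and the illustrative parameter names from the sketch above:

# Retrain once on the best point found, then compare against the defaults:
best = optimizer.max['params']
final_params = {'objective': 'regression', 'learning_rate': 0.02,
                'num_leaves': int(round(best['num_leaves'])),
                'max_depth': int(round(best['max_depth']))}
# num_boost_round here is illustrative; in practice take it from the cv curve
final_model = lgb.train(final_params, train_lgb, num_boost_round=1000)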

