Why does Optuna get stuck at trial 2 (trial_id=3) after computing all of its hyperparameters?

Problem Description

I am using Optuna to tune the hyperparameters of an XGBoost model. I noticed that it got stuck on trial 2 (trial_id=3) for a very long time (244 minutes). But when I looked at the SQLite database that records the trial data, I found that all of trial 2's (trial_id=3) hyperparameters had already been sampled — everything except its mean squared error value. Optuna seems to be stuck at that step of trial 2 (trial_id=3). Why does this happen, and how can I fix it?

Here is the code:

import optuna
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

def xgb_hyperparameter_tuning():
    def objective(trial):
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 1000, 10000, step=100),
            "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]), 
            "max_depth": trial.suggest_int("max_depth", 1, 20, step=1),
            "learning_rate": trial.suggest_float("learning_rate", 0.0001, 0.2, step=0.001),
            "min_child_weight": trial.suggest_float("min_child_weight", 1.0, 20.0, step=1.0),
            "colsample_bytree": trial.suggest_float("colsample_bytree", 0.1, 1.0, step=0.1),
            "subsample": trial.suggest_float("subsample", 0.1, 1.0, step=0.1),
            "reg_alpha": trial.suggest_float("reg_alpha", 0.0, 11.0, step=0.1),        
            "reg_lambda": trial.suggest_float("reg_lambda", 0.0, 11.0, step=0.1),
            "num_parallel_tree": 10,
            "random_state": 16,
            "n_jobs": 10,
            "early_stopping_rounds": 1000,
        }

        # X_train and log_y_train are assumed to be defined in the surrounding scope
        model = XGBRegressor(**params)
        mse = make_scorer(mean_squared_error)
        cv = cross_val_score(estimator=model, X=X_train, y=log_y_train, cv=20, scoring=mse, n_jobs=-1)
        return cv.mean()

    study = optuna.create_study(study_name="HousePriceCompetitionXGB", direction="minimize", storage="sqlite:///house_price_competition_xgb.db", load_if_exists=True)
    study.optimize(objective, n_trials=100)
    return None

xgb_hyperparameter_tuning()

Here is the output:

[I 2021-11-16 10:06:27,522] A new study created in RDB with name: HousePriceCompetitionXGB
[I 2021-11-16 10:08:40,050] Trial 0 finished with value: 0.03599314763859092 and parameters: {'n_estimators': 5800, 'booster': 'gblinear', 'max_depth': 4, 'learning_rate': 0.1641, 'min_child_weight': 17.0, 'colsample_bytree': 0.4, 'subsample': 0.30000000000000004, 'reg_alpha': 10.8, 'reg_lambda': 7.6000000000000005}. Best is trial 0 with value: 0.03599314763859092.
[I 2021-11-16 10:11:55,830] Trial 1 finished with value: 0.028514652199592445 and parameters: {'n_estimators': 6600, 'booster': 'gblinear', 'max_depth': 17, 'learning_rate': 0.0821, 'min_child_weight': 20.0, 'colsample_bytree': 0.7000000000000001, 'subsample': 0.2, 'reg_alpha': 1.2000000000000002, 'reg_lambda': 7.2}. Best is trial 1 with value: 0.028514652199592445.

Here is the data in the trial_values table of the SQLite database:

trial_value_id  trial_id  objective  value
1               1         0          0.0359931476385909
2               2         0          0.0285146521995924
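Note that Optuna numbers trials from 0 in the console log while the RDB storage numbers trial_id from 1, so "Trial 2" in the log corresponds to trial_id=3. A quick sanity check with the IDs from the dumps on this page shows which trial sampled parameters but never reported a value:

```python
# trial_ids appearing in trial_params vs. trial_values (taken from the dumps on this page)
param_trials = {1, 2, 3}
value_trials = {1, 2}

# The difference is the trial whose objective never returned
stuck = param_trials - value_trials
print(stuck)  # {3} — Optuna's "Trial 2" (trial_id=3)
```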

Here is the data in the trial_params table of the SQLite database. You can see that all of trial 2's (trial_id=3) hyperparameters have already been sampled:

param_id  trial_id  param_name        param_value  distribution_json
1         1         n_estimators      5800.0       {"name": "IntUniformDistribution", "attributes": {"low": 1000, "high": 10000, "step": 100}}
2         1         booster           1.0          {"name": "CategoricalDistribution", "attributes": {"choices": ["gbtree", "gblinear", "dart"]}}
3         1         max_depth         4.0          {"name": "IntUniformDistribution", "attributes": {"low": 1, "high": 20, "step": 1}}
4         1         learning_rate     0.1641       {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0001, "high": 0.1991, "q": 0.001}}
5         1         min_child_weight  17.0         {"name": "DiscreteUniformDistribution", "attributes": {"low": 1.0, "high": 20.0, "q": 1.0}}
6         1         colsample_bytree  0.4          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.1, "high": 1.0, "q": 0.1}}
7         1         subsample         0.3          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.1, "high": 1.0, "q": 0.1}}
8         1         reg_alpha         10.8         {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0, "high": 11.0, "q": 0.1}}
9         1         reg_lambda        7.6          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0, "high": 11.0, "q": 0.1}}
10        2         n_estimators      6600.0       {"name": "IntUniformDistribution", "attributes": {"low": 1000, "high": 10000, "step": 100}}
11        2         booster           1.0          {"name": "CategoricalDistribution", "attributes": {"choices": ["gbtree", "gblinear", "dart"]}}
12        2         max_depth         17.0         {"name": "IntUniformDistribution", "attributes": {"low": 1, "high": 20, "step": 1}}
13        2         learning_rate     0.0821       {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0001, "high": 0.1991, "q": 0.001}}
14        2         min_child_weight  20.0         {"name": "DiscreteUniformDistribution", "attributes": {"low": 1.0, "high": 20.0, "q": 1.0}}
15        2         colsample_bytree  0.7          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.1, "high": 1.0, "q": 0.1}}
16        2         subsample         0.2          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.1, "high": 1.0, "q": 0.1}}
17        2         reg_alpha         1.2          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0, "high": 11.0, "q": 0.1}}
18        2         reg_lambda        7.2          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0, "high": 11.0, "q": 0.1}}
19        3         n_estimators      7700.0       {"name": "IntUniformDistribution", "attributes": {"low": 1000, "high": 10000, "step": 100}}
20        3         booster           2.0          {"name": "CategoricalDistribution", "attributes": {"choices": ["gbtree", "gblinear", "dart"]}}
21        3         max_depth         4.0          {"name": "IntUniformDistribution", "attributes": {"low": 1, "high": 20, "step": 1}}
22        3         learning_rate     0.1221       {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0001, "high": 0.1991, "q": 0.001}}
23        3         min_child_weight  3.0          {"name": "DiscreteUniformDistribution", "attributes": {"low": 1.0, "high": 20.0, "q": 1.0}}
24        3         colsample_bytree  0.5          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.1, "high": 1.0, "q": 0.1}}
25        3         subsample         0.1          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.1, "high": 1.0, "q": 0.1}}
26        3         reg_alpha         10.8         {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0, "high": 11.0, "q": 0.1}}
27        3         reg_lambda        1.1          {"name": "DiscreteUniformDistribution", "attributes": {"low": 0.0, "high": 11.0, "q": 0.1}}
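Optuna's RDB storage records categorical parameters in trial_params as a float index into the choices list, which is why booster appears as 1.0 and 2.0 rather than a string. Decoding the index (a quick sketch, using the choices from the distribution_json above) is consistent with the console output: the two finished trials picked gblinear, while the stuck trial_id=3 picked dart:

```python
choices = ["gbtree", "gblinear", "dart"]  # from the CategoricalDistribution above

# booster param_value in the dump: trial_id 1 -> 1.0, trial_id 2 -> 1.0, trial_id 3 -> 2.0
for trial_id, param_value in [(1, 1.0), (2, 1.0), (3, 2.0)]:
    print(trial_id, choices[int(param_value)])
# trial_id 1 and 2 decode to "gblinear" (matching the log lines above);
# trial_id 3 decodes to "dart" — the trial that never finished
```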

Tags: machine-learning, scikit-learn, hyperparameters, optuna

Solution


Although I am not 100% sure, I think I know what is going on.

The problem occurs because some of the sampled parameters do not fit certain booster types, so the trial returns nan as its result and gets stuck at the step where the MSE score is computed.

To fix it, you only need to remove "dart" from the booster choices.

In other words, using "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear"]) instead of "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]) solves the problem.

I got this idea while tuning my LightGBMRegressor model. I found that many trials failed because they returned nan, and all of them used the same "boosting_type"="rf". So I removed rf, and all 100 trials completed without any error. Then I looked back at the XGBRegressor problem I posted above and found that all the stuck trials used "booster":"dart". So I removed dart, and the XGBRegressor ran normally.
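If you want to keep dart in the search space, another option is to guard the objective's return value so that a nan cross-validation score is turned into a large penalty instead of being reported as the trial's value. This only addresses the nan failure mode described above; the helper below (safe_mean_score) is a hypothetical sketch, not part of Optuna's or scikit-learn's API:

```python
import numpy as np

def safe_mean_score(scores, penalty=float("inf")):
    """Return the mean CV score, or a large penalty if any fold produced nan."""
    scores = np.asarray(scores, dtype=float)
    if np.isnan(scores).any():
        return penalty  # a minimizing study will simply never favor this trial
    return float(scores.mean())

print(safe_mean_score([0.1, 0.2]))           # ~0.15
print(safe_mean_score([0.1, float("nan")]))  # inf
```

Inside the objective you would then write `return safe_mean_score(cv)` instead of `return cv.mean()`.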
