r - How can I view intermediate results when tuning mlr in parallel?
Problem description
Is it possible to see the results of the individual tuning rounds when parallelizing mlr with parallelMap at the mlr.tuneParams level?
When I tune serially, I see the result of each hyperparameter combination (hyperparameters, measure) in the console as each CV finishes. So if I kill the job before the tuneParams result is saved, I still have some results.
When I tune in parallel, I don't know how to see intermediate results if the job is terminated. Is it possible to create a log file that shows the results?
Thanks!
Solution
This is not possible with parallelMap. Under the hood, either mclapply() (multicore) or clusterMap() (socket) is called, and neither allows progress output from the workers.
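As a possible workaround (my addition, not part of the original answer): base R's parallel::makeCluster() has an outfile argument that redirects each worker's stdout/stderr to a file (or, with "", to the master console). parallelStartSocket() does not necessarily expose this argument, so this only sketches the underlying mechanism with the parallel package directly:

```r
# Sketch: capturing worker output with base R's 'parallel' package.
# makeCluster()'s 'outfile' argument sends each worker's stdout/stderr
# to the given file; "" would send it to the master's console instead.
library(parallel)

cl <- makeCluster(2, outfile = "worker-log.txt")
res <- clusterMap(cl, function(i) {
  cat(sprintf("finished element %d\n", i))  # written to worker-log.txt
  i^2
}, 1:4)
stopCluster(cl)
unlist(res)
```

If each worker printed its hyperparameter setting and measure this way, a terminated job would still leave a readable log of completed evaluations.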
You may want to try mlr3, which relies on the future package for parallelization. With it you can choose among different parallel backends, which may help achieve what you want.
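A minimal mlr3 sketch of that suggestion (my addition, untested; it assumes current mlr3/mlr3tuning/paradox APIs, and that future relays worker logging to the master session):

```r
# Sketch: random-search tuning in mlr3 with a future multisession backend.
library(mlr3)
library(mlr3learners)  # provides the classif.svm learner (via e1071)
library(mlr3tuning)
library(paradox)

future::plan("multisession", workers = 2)

learner <- lrn("classif.svm", type = "C-classification", kernel = "radial")
search_space <- ps(
  cost  = p_dbl(0.5, 2),
  gamma = p_dbl(0.5, 2)
)

instance <- tune(
  tuner        = tnr("random_search"),
  task         = tsk("iris"),
  learner      = learner,
  resampling   = rsmp("cv", folds = 2),
  measure      = msr("classif.ce"),
  search_space = search_space,
  term_evals   = 5
)

instance$archive  # completed evaluations accumulate here as they finish
```

mlr3 logs each evaluated configuration through the lgr package, and the archive is filled incrementally, so intermediate results are easier to inspect than with mlr/parallelMap.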
library("mlr")
#> Loading required package: ParamHelpers
library("parallelMap")
discrete_ps <- makeParamSet(
  makeDiscreteParam("C", values = c(0.5, 1.0, 1.5, 2.0)),
  makeDiscreteParam("sigma", values = c(0.5, 1.0, 1.5, 2.0))
)
ctrl <- makeTuneControlRandom(maxit = 5)
rdesc <- makeResampleDesc("CV", iters = 2L)
# socket mode ------------------------------------------------------------------
parallelStartSocket(2, level = "mlr.tuneParams")
#> Starting parallelization in mode=socket with cpus=2.
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#> Type len Def Constr Req Tunable Trafo
#> C discrete - - 0.5,1,1.5,2 - TRUE -
#> sigma discrete - - 0.5,1,1.5,2 - TRUE -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> Exporting objects to slaves for mode socket: .mlr.slave.options
#> Mapping in parallel: mode = socket; level = mlr.tuneParams; cpus = 2; elements = 5.
#> [Tune] Result: C=2; sigma=0.5 : mmce.test.mean=0.0600000
parallelStop()
#> Stopped parallelization. All cleaned up.
# sequential -------------------------------------------------------------------
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#> Type len Def Constr Req Tunable Trafo
#> C discrete - - 0.5,1,1.5,2 - TRUE -
#> sigma discrete - - 0.5,1,1.5,2 - TRUE -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> [Tune-x] 1: C=1.5; sigma=1.5
#> [Tune-y] 1: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune-x] 2: C=0.5; sigma=1.5
#> [Tune-y] 2: mmce.test.mean=0.0600000; time: 0.0 min
#> [Tune-x] 3: C=0.5; sigma=1.5
#> [Tune-y] 3: mmce.test.mean=0.0600000; time: 0.0 min
#> [Tune-x] 4: C=1; sigma=2
#> [Tune-y] 4: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune-x] 5: C=1; sigma=2
#> [Tune-y] 5: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune] Result: C=1; sigma=2 : mmce.test.mean=0.0466667
# multicore --------------------------------------------------------------------
parallelStartMulticore(2, level = "mlr.tuneParams")
#> Starting parallelization in mode=multicore with cpus=2.
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#> Type len Def Constr Req Tunable Trafo
#> C discrete - - 0.5,1,1.5,2 - TRUE -
#> sigma discrete - - 0.5,1,1.5,2 - TRUE -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> Mapping in parallel: mode = multicore; level = mlr.tuneParams; cpus = 2; elements = 5.
#> [Tune] Result: C=2; sigma=1.5 : mmce.test.mean=0.0466667
parallelStop()
#> Stopped parallelization. All cleaned up.
Created on 2019-12-26 by the reprex package (v0.3.0)