CRAN Package Check Results for Package mlexperiments

Last updated on 2024-11-03 11:49:32 CET.

Flavor                              Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang   0.0.4       10.91  156.25  167.16  OK
r-devel-linux-x86_64-debian-gcc     0.0.4        0.34    2.29    2.63  ERROR
r-devel-linux-x86_64-fedora-clang   0.0.4                      311.16  OK
r-devel-linux-x86_64-fedora-gcc     0.0.4                      379.38  OK
r-devel-windows-x86_64              0.0.4       11.00  314.00  325.00  OK
r-patched-linux-x86_64              0.0.4       10.05  183.71  193.76  OK
r-release-linux-x86_64              0.0.4        9.85  114.85  124.70  ERROR
r-release-macos-arm64               0.0.4                      208.00  OK
r-release-macos-x86_64              0.0.4                      361.00  OK
r-release-windows-x86_64            0.0.4       12.00  319.00  331.00  OK
r-oldrel-macos-arm64                0.0.4                      153.00  OK
r-oldrel-macos-x86_64               0.0.4                      583.00  OK
r-oldrel-windows-x86_64             0.0.4       16.00  398.00  414.00  OK

Times (Tinstall, Tcheck, Ttotal) are given in seconds; the macOS and Fedora flavors report only the total check time.

Check Details

Version: 0.0.4
Check: whether package can be installed
Result: ERROR
    Installation failed.
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.0.4
Check: package dependencies
Result: WARN
    Skipping vignette re-building
    Packages suggested but not available for checking:
      'ParBayesianOptimization', 'quarto'
    VignetteBuilder package required for checking but not installed: ‘quarto’
Flavor: r-release-linux-x86_64
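
This warning stems from two suggested packages, 'ParBayesianOptimization' and 'quarto', not being installed on this check machine. A minimal, hypothetical R sketch of the usual guard for code that only optionally needs a suggested package (the branch bodies are placeholders, not taken from mlexperiments):

    # Guard an optional code path behind a suggested package.
    if (requireNamespace("ParBayesianOptimization", quietly = TRUE)) {
      # Bayesian-optimization code path would run here.
    } else {
      message("ParBayesianOptimization is not installed; skipping the Bayesian example.")
    }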

Version: 0.0.4
Check: tests
Result: ERROR
  Running ‘testthat.R’ [80s/106s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > library(testthat)
    > library(mlexperiments)
    >
    > test_check("mlexperiments")
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold4
    CV fold: Fold5
    Testing for identical folds in 2 and 1.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    CV fold: Fold1
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    CV fold: Fold2
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    CV fold: Fold3
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    Classification: using 'classification error rate' as optimization metric.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    [ FAIL 7 | WARN 0 | SKIP 1 | PASS 56 ]
    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-knn.R:116:5'): test bayesian tuner, initGrid - knn ─────────────
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─knn_optimization$execute(k = 3) at test-knn.R:116:5
     2. └─private$select_optimizer(self, private)
     3. └─BayesianOptimizer$new(...)
     4. └─mlexperiments (local) initialize(...)
    ── Error ('test-knn.R:184:5'): test bayesian tuner, initPoints - LearnerKnn ────
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─knn_optimization$execute(k = 3) at test-knn.R:184:5
     2. └─private$select_optimizer(self, private)
     3. └─BayesianOptimizer$new(...)
     4. └─mlexperiments (local) initialize(...)
    ── Error ('test-knn.R:257:5'): test nested cv, bayesian - knn ──────────────────
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─knn_optimization$execute() at test-knn.R:257:5
     2. └─mlexperiments:::.run_cv(self = self, private = private)
     3. └─mlexperiments:::.fold_looper(self, private)
     4. ├─base::do.call(private$cv_run_model, run_args)
     5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. ├─base::do.call(.cv_run_nested_model, args)
     7. └─mlexperiments (local) `<fn>`(...)
     8. └─hparam_tuner$execute(k = self$k_tuning)
     9. └─private$select_optimizer(self, private)
    10. └─BayesianOptimizer$new(...)
    11. └─mlexperiments (local) initialize(...)
    ── Error ('test-rpart_classification.R:125:5'): test bayesian tuner, initGrid, classification - rpart ──
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─rpart_optimization$execute(k = 3) at test-rpart_classification.R:125:5
     2. └─private$select_optimizer(self, private)
     3. └─BayesianOptimizer$new(...)
     4. └─mlexperiments (local) initialize(...)
    ── Error ('test-rpart_classification.R:205:5'): test nested cv, bayesian, classification - rpart ──
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─rpart_optimization$execute() at test-rpart_classification.R:205:5
     2. └─mlexperiments:::.run_cv(self = self, private = private)
     3. └─mlexperiments:::.fold_looper(self, private)
     4. ├─base::do.call(private$cv_run_model, run_args)
     5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. ├─base::do.call(.cv_run_nested_model, args)
     7. └─mlexperiments (local) `<fn>`(...)
     8. └─hparam_tuner$execute(k = self$k_tuning)
     9. └─private$select_optimizer(self, private)
    10. └─BayesianOptimizer$new(...)
    11. └─mlexperiments (local) initialize(...)
    ── Error ('test-rpart_regression.R:125:5'): test bayesian tuner, initGrid, regression - rpart ──
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─rpart_optimization$execute(k = 3) at test-rpart_regression.R:125:5
     2. └─private$select_optimizer(self, private)
     3. └─BayesianOptimizer$new(...)
     4. └─mlexperiments (local) initialize(...)
    ── Error ('test-rpart_regression.R:203:5'): test nested cv, bayesian, regression - rpart ──
    Error: Package "ParBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
    Backtrace:
        ▆
     1. └─rpart_optimization$execute() at test-rpart_regression.R:203:5
     2. └─mlexperiments:::.run_cv(self = self, private = private)
     3. └─mlexperiments:::.fold_looper(self, private)
     4. ├─base::do.call(private$cv_run_model, run_args)
     5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
     6. ├─base::do.call(.cv_run_nested_model, args)
     7. └─mlexperiments (local) `<fn>`(...)
     8. └─hparam_tuner$execute(k = self$k_tuning)
     9. └─private$select_optimizer(self, private)
    10. └─BayesianOptimizer$new(...)
    11. └─mlexperiments (local) initialize(...)
    [ FAIL 7 | WARN 0 | SKIP 1 | PASS 56 ]
    Error: Test failures
    Execution halted
Flavor: r-release-linux-x86_64
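
All seven failures are triggered by the same missing suggested package, 'ParBayesianOptimization', not by the learners themselves. A hedged testthat sketch, not taken from the package's test files, of how tests that need 'strategy = "bayesian"' are commonly guarded so they skip instead of erroring when that package is absent (the test body is a placeholder):

    library(testthat)

    test_that("bayesian tuner runs when ParBayesianOptimization is available", {
      # Skip (rather than fail) on machines where the suggested package is missing.
      skip_if_not_installed("ParBayesianOptimization")
      # ... construct the tuner with strategy = "bayesian" and call $execute() here ...
      succeed()
    })

skip_if_not_installed() turns the hard error into a reported skip, which is how conditional use of Suggests-only dependencies is normally handled in CRAN checks.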

Version: 0.0.4
Check: package vignettes
Result: NOTE
    Package has ‘vignettes’ subdirectory but apparently no vignettes.
    Perhaps the ‘VignetteBuilder’ information is missing from the DESCRIPTION file?
Flavor: r-release-linux-x86_64
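
Given the dependency warning above, the likely cause of this NOTE on r-release-linux-x86_64 is that the declared ‘quarto’ vignette builder is not installed on the check machine, rather than a missing field. A small base-R sketch, assuming it is run from the package source directory, for confirming the relevant DESCRIPTION entries:

    # Read the fields the vignette checks rely on from DESCRIPTION.
    desc <- read.dcf("DESCRIPTION", fields = c("VignetteBuilder", "Suggests"))
    desc[, "VignetteBuilder"]            # expected to name "quarto"
    grepl("quarto", desc[, "Suggests"])  # TRUE if quarto is also listed in Suggests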