The goal of this vignette is to explain how to use ResamplingSameOtherSizesCV for various kinds of cross-validation.

Simulations

We begin with a simple simulated data set.

Comparing training on Same/Other/All subsets

N <- 2100
abs.x <- 70
set.seed(2)
x.vec <- runif(N, -abs.x, abs.x)
str(x.vec)
#>  num [1:2100] -44.1 28.3 10.3 -46.5 62.1 ...
library(data.table)
(task.dt <- data.table(
  x=x.vec,
  y = sin(x.vec)+rnorm(N,sd=0.5)))
#>               x           y
#>           <num>       <num>
#>    1: -44.11648 -0.40781530
#>    2:  28.33237 -0.08520601
#>    3:  10.26569 -1.23266284
#>    4: -46.47273 -1.36225125
#>    5:  62.13751 -1.33779346
#>   ---                      
#> 2096:  60.83765 -0.10678010
#> 2097:  55.71469 -0.92403513
#> 2098:  14.31045  1.04519820
#> 2099:  27.18008  1.67815828
#> 2100:  23.67202 -0.26881102
if(require(ggplot2)){
  text.size <- 6
  my_theme <- theme_bw(20)
  theme_set(my_theme)
  ggplot()+
    geom_point(aes(
      x, y),
      shape=1,
      data=task.dt)
}
#> Loading required package: ggplot2

plot of chunk simulationScatter

Above we see a scatterplot of the simulated data. The goal of the learning algorithm will be to predict y from x.

The code below assigns an atomic group ID, and one of three random groups (A, B, C), to the simulated data.

atomic.group.size <- 2
task.dt[, agroup := rep(seq(1, N/atomic.group.size), each=atomic.group.size)][]
#>               x           y agroup
#>           <num>       <num>  <int>
#>    1: -44.11648 -0.40781530      1
#>    2:  28.33237 -0.08520601      1
#>    3:  10.26569 -1.23266284      2
#>    4: -46.47273 -1.36225125      2
#>    5:  62.13751 -1.33779346      3
#>   ---                             
#> 2096:  60.83765 -0.10678010   1048
#> 2097:  55.71469 -0.92403513   1049
#> 2098:  14.31045  1.04519820   1049
#> 2099:  27.18008  1.67815828   1050
#> 2100:  23.67202 -0.26881102   1050
task.dt[, random_group := rep(
  rep(c("A","B","B","C","C","C","C"), each=atomic.group.size),
  l=.N
)][]
#>               x           y agroup random_group
#>           <num>       <num>  <int>       <char>
#>    1: -44.11648 -0.40781530      1            A
#>    2:  28.33237 -0.08520601      1            A
#>    3:  10.26569 -1.23266284      2            B
#>    4: -46.47273 -1.36225125      2            B
#>    5:  62.13751 -1.33779346      3            B
#>   ---                                          
#> 2096:  60.83765 -0.10678010   1048            C
#> 2097:  55.71469 -0.92403513   1049            C
#> 2098:  14.31045  1.04519820   1049            C
#> 2099:  27.18008  1.67815828   1050            C
#> 2100:  23.67202 -0.26881102   1050            C
table(group.tab <- task.dt$random_group)
#> 
#>    A    B    C 
#>  300  600 1200

The output above shows the number of rows in each random group. Below we define a task,

reg.task <- mlr3::TaskRegr$new(
  "sin", task.dt, target="y")
reg.task$col_roles$group <- "agroup"
reg.task$col_roles$stratum <- "random_group"
reg.task$col_roles$feature <- "x"

Note that if we assign the subset role at this point, we will get an error, because this is not a standard mlr3 role.

reg.task$col_roles$subset <- "random_group" 
#> Error in .__Task__col_roles(self = self, private = private, super = super, : Assertion on 'names(rhs)' failed: Names must be a permutation of set {'feature','target','name','order','stratum','group','offset','weights_learner','weights_measure'}, but has extra elements {'subset'}.

Below we define the cross-validation object (this loads the mlr3resampling package, which defines the subset column role), and then we assign the random group column to the subset role.

soak_default <- mlr3resampling::ResamplingSameOtherSizesCV$new()
reg.task$col_roles$subset <- "random_group" 
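
We can verify that the role is now recognized (a quick check, not part of the original code):

## "subset" now appears among the task's column roles.
"subset" %in% names(reg.task$col_roles)
reg.task$col_roles$subset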

Below we instantiate a clone of the resampler, in order to show details about how it works (normally you should not instantiate it yourself; this is done automatically inside the call to mlr3::benchmark).

soak_default$clone()$instantiate(reg.task)$instance$iteration.dt
#>     test.subset train.subsets groups test.fold                             test
#>          <char>        <char>  <int>     <int>                           <list>
#>  1:           A           all    700         1       43,44,57,58,71,72,...[100]
#>  2:           B           all    700         1        3, 4, 5, 6,17,18,...[200]
#>  3:           C           all    700         1       23,24,25,26,37,38,...[400]
#>  4:           A           all    700         2        1, 2,15,16,29,30,...[100]
#>  5:           B           all    700         2       33,34,47,48,61,62,...[200]
#>  6:           C           all    700         2       13,14,21,22,35,36,...[400]
#>  7:           A           all    700         3  99,100,155,156,169,170,...[100]
#>  8:           B           all    700         3       19,20,45,46,75,76,...[200]
#>  9:           C           all    700         3        7, 8, 9,10,11,12,...[400]
#> 10:           A         other    600         1       43,44,57,58,71,72,...[100]
#> 11:           B         other    500         1        3, 4, 5, 6,17,18,...[200]
#> 12:           C         other    300         1       23,24,25,26,37,38,...[400]
#> 13:           A         other    600         2        1, 2,15,16,29,30,...[100]
#> 14:           B         other    500         2       33,34,47,48,61,62,...[200]
#> 15:           C         other    300         2       13,14,21,22,35,36,...[400]
#> 16:           A         other    600         3  99,100,155,156,169,170,...[100]
#> 17:           B         other    500         3       19,20,45,46,75,76,...[200]
#> 18:           C         other    300         3        7, 8, 9,10,11,12,...[400]
#> 19:           A          same    100         1       43,44,57,58,71,72,...[100]
#> 20:           B          same    200         1        3, 4, 5, 6,17,18,...[200]
#> 21:           C          same    400         1       23,24,25,26,37,38,...[400]
#> 22:           A          same    100         2        1, 2,15,16,29,30,...[100]
#> 23:           B          same    200         2       33,34,47,48,61,62,...[200]
#> 24:           C          same    400         2       13,14,21,22,35,36,...[400]
#> 25:           A          same    100         3  99,100,155,156,169,170,...[100]
#> 26:           B          same    200         3       19,20,45,46,75,76,...[200]
#> 27:           C          same    400         3        7, 8, 9,10,11,12,...[400]
#>     test.subset train.subsets groups test.fold                             test
#>          <char>        <char>  <int>     <int>                           <list>
#>                           train  seed n.train.groups iteration Train_subsets
#>                          <list> <int>          <int>     <int>        <fctr>
#>  1:  1, 2, 7, 8, 9,10,...[1400]     1            700         1           all
#>  2:  1, 2, 7, 8, 9,10,...[1400]     1            700         2           all
#>  3:  1, 2, 7, 8, 9,10,...[1400]     1            700         3           all
#>  4:       3,4,5,6,7,8,...[1400]     1            700         4           all
#>  5:       3,4,5,6,7,8,...[1400]     1            700         5           all
#>  6:       3,4,5,6,7,8,...[1400]     1            700         6           all
#>  7:       1,2,3,4,5,6,...[1400]     1            700         7           all
#>  8:       1,2,3,4,5,6,...[1400]     1            700         8           all
#>  9:       1,2,3,4,5,6,...[1400]     1            700         9           all
#> 10:  7, 8, 9,10,11,12,...[1200]     1            600        10         other
#> 11:  1, 2, 7, 8, 9,10,...[1000]     1            500        11         other
#> 12:   1, 2,15,16,19,20,...[600]     1            300        12         other
#> 13:       3,4,5,6,7,8,...[1200]     1            600        13         other
#> 14:  7, 8, 9,10,11,12,...[1000]     1            500        14         other
#> 15:   3, 4, 5, 6,17,18,...[600]     1            300        15         other
#> 16:  3, 4, 5, 6,13,14,...[1200]     1            600        16         other
#> 17:  1, 2,13,14,15,16,...[1000]     1            500        17         other
#> 18:        1,2,3,4,5,6,...[600]     1            300        18         other
#> 19:   1, 2,15,16,29,30,...[200]     1            100        19          same
#> 20:  19,20,33,34,45,46,...[400]     1            200        20          same
#> 21:   7, 8, 9,10,11,12,...[800]     1            400        21          same
#> 22:  43,44,57,58,71,72,...[200]     1            100        22          same
#> 23:   3, 4, 5, 6,17,18,...[400]     1            200        23          same
#> 24:   7, 8, 9,10,11,12,...[800]     1            400        24          same
#> 25:   1, 2,15,16,29,30,...[200]     1            100        25          same
#> 26:   3, 4, 5, 6,17,18,...[400]     1            200        26          same
#> 27:  13,14,21,22,23,24,...[800]     1            400        27          same
#>                           train  seed n.train.groups iteration Train_subsets
#>                          <list> <int>          <int>     <int>        <fctr>

With this K-fold cross-validation, we do one train/test split for each row of the table above: one row for each combination of test subset (A, B, C), train subsets (same, other, all), and test fold (1, 2, 3).
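
We can verify these counts directly (a sketch, re-using an instantiated clone as above; dcast is from data.table, which is already attached):

it.dt <- soak_default$clone()$instantiate(reg.task)$instance$iteration.dt
## 3 test subsets x 3 train subsets x 3 folds = 27 train/test splits.
dcast(it.dt, train.subsets ~ test.subset, length, value.var="iteration")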

We compute and plot the results using the code below,

(reg.learner.list <- list(
  mlr3::LearnerRegrFeatureless$new()))
#> [[1]]
#> 
#> ── <LearnerRegrFeatureless> (regr.featureless): Featureless Regression Learner ─
#> • Model: -
#> • Parameters: robust=FALSE
#> • Packages: mlr3 and stats
#> • Predict Types: [response], se, and quantiles
#> • Feature Types: logical, integer, numeric, character, factor, ordered,
#> POSIXct, and Date
#> • Encapsulation: none (fallback: -)
#> • Properties: featureless, importance, missings, selected_features, and weights
#> • Other settings: use_weights = 'use'
if(requireNamespace("rpart")){
  reg.learner.list$rpart <- mlr3::LearnerRegrRpart$new()
}
#> Loading required namespace: rpart
set.seed(3)
(soak_default_grid <- mlr3::benchmark_grid(
  reg.task,
  reg.learner.list,
  soak_default))
#>      task          learner          resampling
#>    <char>           <char>              <char>
#> 1:    sin regr.featureless same_other_sizes_cv
#> 2:    sin       regr.rpart same_other_sizes_cv
##if(require(future))plan("multisession")
lgr::get_logger("mlr3")$set_threshold("warn")
(soak_default_result <- mlr3::benchmark(
  soak_default_grid, store_models = TRUE))
#> 
#> ── <BenchmarkResult> of 54 rows with 2 resampling run ──────────────────────────
#>  nr task_id       learner_id       resampling_id iters warnings errors
#>   1     sin regr.featureless same_other_sizes_cv    27        0      0
#>   2     sin       regr.rpart same_other_sizes_cv    27        0      0
soak_default_score <- mlr3resampling::score(
  soak_default_result, mlr3::msr("regr.rmse"))
plot(soak_default_score)+my_theme

plot of chunk SameOtherCV

The plot method above shows a multi-panel figure (one vertical facet per algorithm), whereas below we make a custom ggplot with no vertical facets, using color to indicate the algorithm.

soak_default_score[, n.train := sapply(train, length)]
soak_default_score[1]
#>    test.subset train.subsets groups test.fold                             test
#>         <char>        <char>  <int>     <int>                           <list>
#> 1:           A           all    700         1  57, 58,141,142,211,212,...[100]
#>                    train  seed n.train.groups iteration Train_subsets
#>                   <list> <int>          <int>     <int>        <fctr>
#> 1: 1,2,3,4,5,6,...[1400]     1            700         1           all
#>                                   uhash    nr           task task_id
#>                                  <char> <int>         <list>  <char>
#> 1: 9ff07e67-1e0f-4d35-a0ba-058b38a9dd81     1 <TaskRegr:sin>     sin
#>                                      learner       learner_id
#>                                       <list>           <char>
#> 1: <LearnerRegrFeatureless:regr.featureless> regr.featureless
#>                      resampling       resampling_id  prediction_test regr.rmse
#>                          <list>              <char>           <list>     <num>
#> 1: <ResamplingSameOtherSizesCV> same_other_sizes_cv <PredictionRegr> 0.9117741
#>      algorithm n.train
#>         <char>   <int>
#> 1: featureless    1400
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      regr.rmse, train.subsets, color=algorithm),
      shape=1,
      data=soak_default_score)+
    geom_text(aes(
      Inf, train.subsets,
      label=sprintf("n.train=%d ", n.train)),
      size=text.size,
      hjust=1,
      vjust=1.5,
      data=soak_default_score[algorithm=="featureless" & test.fold==1])+
    facet_grid(. ~ test.subset, labeller=label_both, scales="free")+
    scale_x_continuous(
      "Root mean squared prediction error (test set)")
}

plot of chunk unnamed-chunk-6

The figure above shows the effect of train set size on test error.

soak_default_wide <- dcast(
  soak_default_score,
  algorithm + test.subset + train.subsets ~ .,
  list(mean, sd),
  value.var="regr.rmse")
if(require(ggplot2)){
  ggplot()+
    geom_segment(aes(
      regr.rmse_mean+regr.rmse_sd, train.subsets,
      xend=regr.rmse_mean-regr.rmse_sd, yend=train.subsets,
      color=algorithm),
      data=soak_default_wide)+
    geom_point(aes(
      regr.rmse_mean, train.subsets, color=algorithm),
      shape=1,
      data=soak_default_wide)+
    geom_text(aes(
      Inf, train.subsets,
      label=sprintf("n.train=%d ", n.train)),
      size=text.size,
      hjust=1,
      vjust=1.5,
      data=soak_default_score[algorithm=="featureless" & test.fold==1])+
    facet_grid(. ~ test.subset, labeller=label_both, scales="free")+
    scale_x_continuous(
      "Root mean squared prediction error (test set)")
}

plot of chunk unnamed-chunk-7

The figure above shows a test subset in each panel, the train subsets on the y axis, the test error on the x axis, and the two algorithms in two different colors.

Overall in the plot above, all tends to have less prediction error than same, which suggests that the subsets are similar (and indeed the subsets are i.i.d. in this simulation). Another visualization method is shown below,

plist <- mlr3resampling::pvalue(soak_default_score, digits=3)
plot(plist)+my_theme

plot of chunk unnamed-chunk-8

The visualization above includes P-values (two-sided t-tests) for the differences between Same and Other/All.
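
Such a P-value can also be computed by hand; the code below is an illustrative sketch of a two-sided t-test comparing Same versus All for one algorithm and test subset (it may differ in detail from what mlr3resampling::pvalue computes):

## Compare RMSE over the three test folds, Same versus All train subsets.
one <- soak_default_score[algorithm=="rpart" & test.subset=="A"]
t.test(
  one[train.subsets=="same", regr.rmse],
  one[train.subsets=="all", regr.rmse])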

Below we visualize test error as a function of train size.

if(require(ggplot2)){
  ggplot()+
    theme(legend.position=c(0.85,0.85))+
    geom_line(aes(
      n.train, regr.rmse,
      color=algorithm,
      group=paste(algorithm, test.fold)),
      data=soak_default_score)+
    geom_label(aes(
      n.train, regr.rmse,
      color=algorithm,
      label=train.subsets),
      size=text.size,
      data=soak_default_score)+
    facet_grid(. ~ test.subset, labeller=label_both, scales="free")+
    scale_y_continuous(
      "Root mean squared prediction error (test set)")
}

plot of chunk unnamed-chunk-9

Downsample to see how much train data is required for good accuracy overall

In the previous section we defined a task using the subset role, which means that the different values in that column are used to define different subsets for training/testing via same/other/all CV. In contrast, below we define a task without the subset role, which means that there are no separate CV iterations for same/other/all (the full data set is treated as a single subset, and the train subset is always same).

task.no.subset <- mlr3::TaskRegr$new(
  "sin", task.dt, target="y")
task.no.subset$col_roles$group <- "agroup"
task.no.subset$col_roles$stratum <- "random_group"
task.no.subset$col_roles$feature <- "x"
str(task.no.subset$col_roles)
#> List of 10
#>  $ feature        : chr "x"
#>  $ target         : chr "y"
#>  $ name           : chr(0) 
#>  $ order          : chr(0) 
#>  $ stratum        : chr "random_group"
#>  $ group          : chr "agroup"
#>  $ offset         : chr(0) 
#>  $ weights_learner: chr(0) 
#>  $ weights_measure: chr(0) 
#>  $ subset         : chr(0)

Below we define the cross-validation object, and set sizes to 5, so we can see what happens with train sets at five sizes smaller than the full train set size.

five_smaller_sizes <- mlr3resampling::ResamplingSameOtherSizesCV$new()
five_smaller_sizes$param_set$values$sizes <- 5
five_smaller_sizes$clone()$instantiate(task.no.subset)$instance$iteration.dt
#>     test.subset train.subsets groups test.fold                       test
#>          <char>        <char>  <int>     <int>                     <list>
#>  1:        full          same    700         1  7, 8,11,12,15,16,...[700]
#>  2:        full          same    700         1  7, 8,11,12,15,16,...[700]
#>  3:        full          same    700         1  7, 8,11,12,15,16,...[700]
#>  4:        full          same    700         1  7, 8,11,12,15,16,...[700]
#>  5:        full          same    700         1  7, 8,11,12,15,16,...[700]
#>  6:        full          same    700         1  7, 8,11,12,15,16,...[700]
#>  7:        full          same    700         2  1, 2, 5, 6,17,18,...[700]
#>  8:        full          same    700         2  1, 2, 5, 6,17,18,...[700]
#>  9:        full          same    700         2  1, 2, 5, 6,17,18,...[700]
#> 10:        full          same    700         2  1, 2, 5, 6,17,18,...[700]
#> 11:        full          same    700         2  1, 2, 5, 6,17,18,...[700]
#> 12:        full          same    700         2  1, 2, 5, 6,17,18,...[700]
#> 13:        full          same    700         3  3, 4, 9,10,13,14,...[700]
#> 14:        full          same    700         3  3, 4, 9,10,13,14,...[700]
#> 15:        full          same    700         3  3, 4, 9,10,13,14,...[700]
#> 16:        full          same    700         3  3, 4, 9,10,13,14,...[700]
#> 17:        full          same    700         3  3, 4, 9,10,13,14,...[700]
#> 18:        full          same    700         3  3, 4, 9,10,13,14,...[700]
#>                               train  seed n.train.groups iteration
#>                              <list> <int>          <int>     <int>
#>  1:  25, 26,177,178,229,230,...[42]     1             21         1
#>  2:  23, 24, 25, 26,177,178,...[84]     1             43         2
#>  3:       9,10,17,18,23,24,...[170]     1             87         3
#>  4:       1, 2, 9,10,17,18,...[350]     1            175         4
#>  5:       1, 2, 9,10,17,18,...[700]     1            350         5
#>  6:           1,2,3,4,5,6,...[1400]     1            700         6
#>  7:  21, 22, 77, 78,167,168,...[42]     1             21         7
#>  8:       21,22,61,62,77,78,...[84]     1             43         8
#>  9:      21,22,57,58,61,62,...[170]     1             87         9
#> 10:      21,22,57,58,61,62,...[350]     1            175        10
#> 11:      21,22,23,24,33,34,...[700]     1            350        11
#> 12:      3, 4, 7, 8, 9,10,...[1400]     1            700        12
#> 13:  15, 16,169,170,295,296,...[42]     1             21        13
#> 14:  15, 16,169,170,253,254,...[84]     1             43        14
#> 15:      15,16,17,18,33,34,...[170]     1             87        15
#> 16:       1, 2,15,16,17,18,...[350]     1            175        16
#> 17:       1, 2,15,16,17,18,...[700]     1            350        17
#> 18:           1,2,5,6,7,8,...[1400]     1            700        18
#>     Train_subsets
#>            <fctr>
#>  1:          same
#>  2:          same
#>  3:          same
#>  4:          same
#>  5:          same
#>  6:          same
#>  7:          same
#>  8:          same
#>  9:          same
#> 10:          same
#> 11:          same
#> 12:          same
#> 13:          same
#> 14:          same
#> 15:          same
#> 16:          same
#> 17:          same
#> 18:          same

With this K-fold cross-validation, we do one train/test split for each row of the table above: one row for each combination of n.train.groups (the full train set size plus 5 smaller sizes) and test fold (1, 2, 3).
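
Again we can verify the counts (a quick check, re-using an instantiated clone):

it.dt <- five_smaller_sizes$clone()$instantiate(task.no.subset)$instance$iteration.dt
## 6 train set sizes x 3 test folds = 18 train/test splits.
it.dt[, table(n.train.groups, test.fold)]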

We compute and plot the results using the code below,

(reg.learner.list <- list(
  mlr3::LearnerRegrFeatureless$new()))
#> [[1]]
#> 
#> ── <LearnerRegrFeatureless> (regr.featureless): Featureless Regression Learner ─
#> • Model: -
#> • Parameters: robust=FALSE
#> • Packages: mlr3 and stats
#> • Predict Types: [response], se, and quantiles
#> • Feature Types: logical, integer, numeric, character, factor, ordered,
#> POSIXct, and Date
#> • Encapsulation: none (fallback: -)
#> • Properties: featureless, importance, missings, selected_features, and weights
#> • Other settings: use_weights = 'use'
if(requireNamespace("rpart")){
  reg.learner.list$rpart <- mlr3::LearnerRegrRpart$new()
}
set.seed(1)
(five_smaller_sizes_grid <- mlr3::benchmark_grid(
  task.no.subset,
  reg.learner.list,
  five_smaller_sizes))
#>      task          learner          resampling
#>    <char>           <char>              <char>
#> 1:    sin regr.featureless same_other_sizes_cv
#> 2:    sin       regr.rpart same_other_sizes_cv
##if(require(future))plan("multisession")
lgr::get_logger("mlr3")$set_threshold("warn")
(five_smaller_sizes_result <- mlr3::benchmark(
  five_smaller_sizes_grid, store_models = TRUE))
#> 
#> ── <BenchmarkResult> of 36 rows with 2 resampling run ──────────────────────────
#>  nr task_id       learner_id       resampling_id iters warnings errors
#>   1     sin regr.featureless same_other_sizes_cv    18        0      0
#>   2     sin       regr.rpart same_other_sizes_cv    18        0      0
five_smaller_sizes_score <- mlr3resampling::score(
  five_smaller_sizes_result, mlr3::msr("regr.rmse")
)[, n.train := sapply(train, length)]
five_smaller_sizes_score[1]
#>    test.subset train.subsets groups test.fold                       test
#>         <char>        <char>  <int>     <int>                     <list>
#> 1:        full          same    700         1  5, 6,17,18,21,22,...[700]
#>                              train  seed n.train.groups iteration Train_subsets
#>                             <list> <int>          <int>     <int>        <fctr>
#> 1: 217,218,269,270,291,292,...[42]     1             21         1          same
#>                                   uhash    nr           task task_id
#>                                  <char> <int>         <list>  <char>
#> 1: 01e57f1f-17f4-47d1-97b8-445fe261a36b     1 <TaskRegr:sin>     sin
#>                                      learner       learner_id
#>                                       <list>           <char>
#> 1: <LearnerRegrFeatureless:regr.featureless> regr.featureless
#>                      resampling       resampling_id  prediction_test regr.rmse
#>                          <list>              <char>           <list>     <num>
#> 1: <ResamplingSameOtherSizesCV> same_other_sizes_cv <PredictionRegr> 0.8535652
#>      algorithm n.train
#>         <char>   <int>
#> 1: featureless      42
if(require(ggplot2)){
  ggplot()+
    geom_line(aes(
      n.train, regr.rmse,
      color=algorithm,
      group=paste(algorithm, test.fold)),
      data=five_smaller_sizes_score)+
    geom_point(aes(
      n.train, regr.rmse,
      color=algorithm),
      data=five_smaller_sizes_score)+
    facet_grid(. ~ test.subset, labeller=label_both, scales="free")+
    scale_x_log10(
      "Number of train rows",
      breaks=unique(five_smaller_sizes_score$n.train))+
    scale_y_continuous(
      "Root mean squared prediction error (test set)")
}

plot of chunk unnamed-chunk-12

From the plot above, it looks like about 700 train rows are enough to get near-minimal test error with the rpart learner.

Reproducibility

After the benchmark is done, how can we reproduce it?

Reproducing K-fold CV for largest train size

First we create a new task with a fold column containing the fold IDs used in the previous benchmark.

(five_smaller_instantiated <- five_smaller_sizes_result$resamplings$resampling[[1]])
#> 
#> ── <ResamplingSameOtherSizesCV> : Compare Same/Other and Sizes Cross-Validation 
#> • Iterations: 18
#> • Instantiated: TRUE
#> • Parameters: folds=3, seeds=1, ratio=0.5, sizes=5, ignore_subset=FALSE,
#> subsets=SOA
(task.dt.fold <- five_smaller_instantiated$instance$fold.dt[
, data.table(fold, task.dt)])
#>        fold         x           y agroup random_group
#>       <int>     <num>       <num>  <int>       <char>
#>    1:     2 -44.11648 -0.40781530      1            A
#>    2:     2  28.33237 -0.08520601      1            A
#>    3:     3  10.26569 -1.23266284      2            B
#>    4:     3 -46.47273 -1.36225125      2            B
#>    5:     1  62.13751 -1.33779346      3            B
#>   ---                                                
#> 2096:     1  60.83765 -0.10678010   1048            C
#> 2097:     1  55.71469 -0.92403513   1049            C
#> 2098:     1  14.31045  1.04519820   1049            C
#> 2099:     1  27.18008  1.67815828   1050            C
#> 2100:     1  23.67202 -0.26881102   1050            C
task.with.fold <- mlr3::TaskRegr$new(
  "sin", task.dt.fold, target="y")
task.with.fold$col_roles$group <- "agroup"
task.with.fold$col_roles$stratum <- "random_group"
task.with.fold$col_roles$feature <- "x"

Then we run a new benchmark with custom CV.

fold_col_cv <- mlr3::ResamplingCustomCV$new()
fold_col_cv$instantiate(task.with.fold, col="fold")
(fold_col_grid <- mlr3::benchmark_grid(
  task.no.subset, #works because same number of rows!
  reg.learner.list,
  fold_col_cv))
#>      task          learner resampling
#>    <char>           <char>     <char>
#> 1:    sin regr.featureless  custom_cv
#> 2:    sin       regr.rpart  custom_cv
lgr::get_logger("mlr3")$set_threshold("warn")
(fold_col_result <- mlr3::benchmark(
  fold_col_grid, store_models = TRUE))
#> 
#> ── <BenchmarkResult> of 6 rows with 2 resampling run ───────────────────────────
#>  nr task_id       learner_id resampling_id iters warnings errors
#>   1     sin regr.featureless     custom_cv     3        0      0
#>   2     sin       regr.rpart     custom_cv     3        0      0

The code below compares the original benchmark from the previous section to the new benchmark computed in this section.

rep_score_list <- list(
  reproduced=fold_col_result$score(mlr3::msr("regr.rmse"))[, test.fold := iteration],
  original=five_smaller_sizes_score[n.train==max(n.train)])
rep_score_dt <- data.table(data_source=names(rep_score_list))[
, rep_score_list[[data_source]][, .(learner_id, test.fold, regr.rmse)]
, by=data_source]
dcast(rep_score_dt, learner_id + data_source ~ test.fold, value.var="regr.rmse")
#> Key: <learner_id, data_source>
#>          learner_id data_source         1         2         3
#>              <char>      <char>     <num>     <num>     <num>
#> 1: regr.featureless    original 0.8416002 0.8518081 0.8897127
#> 2: regr.featureless  reproduced 0.8416002 0.8518081 0.8897127
#> 3:       regr.rpart    original 0.7669863 0.7177109 0.7021508
#> 4:       regr.rpart  reproduced 0.7669863 0.7177109 0.7021508

The output above shows that the reproduced error rates are consistent with the original error rates.

Reproducing each split

Each train/test split in mlr3 is called an iteration. Below we use a custom resampling to reproduce the iterations created by five_smaller_sizes in the previous section.

custom_splits <- mlr3::ResamplingCustom$new()
five_smaller_instantiated$instance$iteration.dt[
, custom_splits$instantiate(task.no.subset, train, test)]
#> 
#> ── <ResamplingCustom> : Custom Splits ──────────────────────────────────────────
#> • Iterations: 18
#> • Instantiated: TRUE
#> • Parameters: list()
(custom_splits_grid <- mlr3::benchmark_grid(
  task.no.subset,
  reg.learner.list,
  custom_splits))
#>      task          learner resampling
#>    <char>           <char>     <char>
#> 1:    sin regr.featureless     custom
#> 2:    sin       regr.rpart     custom
lgr::get_logger("mlr3")$set_threshold("warn")
(custom_split_result <- mlr3::benchmark(
  custom_splits_grid, store_models = TRUE))
#> 
#> ── <BenchmarkResult> of 36 rows with 2 resampling run ──────────────────────────
#>  nr task_id       learner_id resampling_id iters warnings errors
#>   1     sin regr.featureless        custom    18        0      0
#>   2     sin       regr.rpart        custom    18        0      0

The code below compares the reproduced error rates computed in this section with the original error rates computed in the previous section.

rep_custom_list <- list(
  reproduced=custom_split_result$score(mlr3::msr("regr.rmse")),
  original=five_smaller_sizes_score)
rep_custom_dt <- data.table(data_source=names(rep_custom_list))[
, rep_custom_list[[data_source]][, .(learner_id, iteration, regr.rmse)]
, by=data_source]
dcast(
  rep_custom_dt,
  learner_id + iteration ~ data_source,
  value.var="regr.rmse"
)[, diff := reproduced-original][]
#> Key: <learner_id, iteration>
#>           learner_id iteration  original reproduced  diff
#>               <char>     <int>     <num>      <num> <num>
#>  1: regr.featureless         1 0.8535652  0.8535652     0
#>  2: regr.featureless         2 0.8434235  0.8434235     0
#>  3: regr.featureless         3 0.8435511  0.8435511     0
#>  4: regr.featureless         4 0.8424592  0.8424592     0
#>  5: regr.featureless         5 0.8417339  0.8417339     0
#>  6: regr.featureless         6 0.8416002  0.8416002     0
#>  7: regr.featureless         7 0.8659285  0.8659285     0
#>  8: regr.featureless         8 0.8531501  0.8531501     0
#>  9: regr.featureless         9 0.8516031  0.8516031     0
#> 10: regr.featureless        10 0.8515782  0.8515782     0
#> 11: regr.featureless        11 0.8528633  0.8528633     0
#> 12: regr.featureless        12 0.8518081  0.8518081     0
#> 13: regr.featureless        13 0.9029310  0.9029310     0
#> 14: regr.featureless        14 0.8889091  0.8889091     0
#> 15: regr.featureless        15 0.8888373  0.8888373     0
#> 16: regr.featureless        16 0.8901708  0.8901708     0
#> 17: regr.featureless        17 0.8905298  0.8905298     0
#> 18: regr.featureless        18 0.8897127  0.8897127     0
#> 19:       regr.rpart         1 0.8601782  0.8601782     0
#> 20:       regr.rpart         2 0.9032483  0.9032483     0
#> 21:       regr.rpart         3 0.8548039  0.8548039     0
#> 22:       regr.rpart         4 0.7586685  0.7586685     0
#> 23:       regr.rpart         5 0.7934613  0.7934613     0
#> 24:       regr.rpart         6 0.7669863  0.7669863     0
#> 25:       regr.rpart         7 0.9381619  0.9381619     0
#> 26:       regr.rpart         8 0.9009195  0.9009195     0
#> 27:       regr.rpart         9 0.8973343  0.8973343     0
#> 28:       regr.rpart        10 0.8060342  0.8060342     0
#> 29:       regr.rpart        11 0.6751932  0.6751932     0
#> 30:       regr.rpart        12 0.7177109  0.7177109     0
#> 31:       regr.rpart        13 0.9881721  0.9881721     0
#> 32:       regr.rpart        14 0.9225649  0.9225649     0
#> 33:       regr.rpart        15 0.9039789  0.9039789     0
#> 34:       regr.rpart        16 0.8120602  0.8120602     0
#> 35:       regr.rpart        17 0.7075818  0.7075818     0
#> 36:       regr.rpart        18 0.7021508  0.7021508     0
#>           learner_id iteration  original reproduced  diff
#>               <char>     <int>     <num>      <num> <num>

The output above shows a difference of zero, indicating that the error rates are consistent.

Downsample to sizes of other sets

To investigate down-sampling in the context of training on same/other/all subsets, we first generate some new data (smaller than previously).

N <- 600
abs.x <- 20
set.seed(1)
x.vec <- sort(runif(N, -abs.x, abs.x))
str(x.vec)
#>  num [1:600] -19.9 -19.9 -19.7 -19.6 -19.6 ...
library(data.table)
(task.dt <- data.table(
  x=x.vec,
  y = sin(x.vec)+rnorm(N,sd=0.5)))
#>              x          y
#>          <num>      <num>
#>   1: -19.92653 -0.4336887
#>   2: -19.92269 -1.4023484
#>   3: -19.67486  0.2509134
#>   4: -19.55856 -0.8428921
#>   5: -19.55402  0.1794473
#>  ---                     
#> 596:  19.70736  0.7497818
#> 597:  19.74997  0.3178435
#> 598:  19.75656  1.3950030
#> 599:  19.83862 -0.2086586
#> 600:  19.84309  0.5748863
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      x, y),
      shape=1,
      data=task.dt)+
    coord_equal()
}

plot of chunk simulationShort

atomic.subset.size <- 2
task.dt[, agroup := rep(seq(1, N/atomic.subset.size), each=atomic.subset.size)][]
#>              x          y agroup
#>          <num>      <num>  <int>
#>   1: -19.92653 -0.4336887      1
#>   2: -19.92269 -1.4023484      1
#>   3: -19.67486  0.2509134      2
#>   4: -19.55856 -0.8428921      2
#>   5: -19.55402  0.1794473      3
#>  ---                            
#> 596:  19.70736  0.7497818    298
#> 597:  19.74997  0.3178435    299
#> 598:  19.75656  1.3950030    299
#> 599:  19.83862 -0.2086586    300
#> 600:  19.84309  0.5748863    300
task.dt[, random_subset := rep(
  rep(c("A","B","B","B"), each=atomic.subset.size),
  l=.N
)][]
#>              x          y agroup random_subset
#>          <num>      <num>  <int>        <char>
#>   1: -19.92653 -0.4336887      1             A
#>   2: -19.92269 -1.4023484      1             A
#>   3: -19.67486  0.2509134      2             B
#>   4: -19.55856 -0.8428921      2             B
#>   5: -19.55402  0.1794473      3             B
#>  ---                                          
#> 596:  19.70736  0.7497818    298             B
#> 597:  19.74997  0.3178435    299             B
#> 598:  19.75656  1.3950030    299             B
#> 599:  19.83862 -0.2086586    300             B
#> 600:  19.84309  0.5748863    300             B
table(subset.tab <- task.dt$random_subset)
#> 
#>   A   B 
#> 150 450
reg.task <- mlr3::TaskRegr$new(
  "sin", task.dt, target="y")
reg.task$col_roles$subset <- "random_subset"
reg.task$col_roles$group <- "agroup"
reg.task$col_roles$stratum <- "random_subset"
reg.task$col_roles$feature <- "x"
soak_sizes <- mlr3resampling::ResamplingSameOtherSizesCV$new()

In the previous section we analyzed the prediction accuracy of same/other/all, which corresponds to keeping the sizes parameter at its default of -1. The main difference in this section is that we change sizes to 0, which means to additionally down-sample the same/other/all train sets, so we can see if there is an effect of sample size (there should be, for i.i.d. problems of intermediate difficulty). The sizes values used in this vignette are summarized in the comments below, after which we set sizes to 0:
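
## Values of the sizes parameter used in this vignette (a summary sketch,
## based on the examples here rather than authoritative documentation):
##  sizes = -1 : default, no down-sampling; same/other/all at full size only.
##  sizes =  0 : additionally down-sample each train set to the sizes of the
##               other train subsets (used in this section).
##  sizes =  K : for K > 0, additionally use K smaller train set sizes, each
##               ratio (default 0.5) times the next larger size (K=5 earlier).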

soak_sizes$param_set$values$sizes <- 0
soak_sizes$instantiate(reg.task)
soak_sizes$instance$iteration.dt
#>     test.subset train.subsets groups test.fold                       test
#>          <char>        <char>  <int>     <int>                     <list>
#>  1:           A           all    200         1   1, 2,49,50,57,58,...[50]
#>  2:           A           all    200         1   1, 2,49,50,57,58,...[50]
#>  3:           A           all    200         1   1, 2,49,50,57,58,...[50]
#>  4:           B           all    200         1 19,20,31,32,37,38,...[150]
#>  5:           B           all    200         1 19,20,31,32,37,38,...[150]
#>  6:           B           all    200         1 19,20,31,32,37,38,...[150]
#>  7:           A           all    200         2  17,18,41,42,89,90,...[50]
#>  8:           A           all    200         2  17,18,41,42,89,90,...[50]
#>  9:           A           all    200         2  17,18,41,42,89,90,...[50]
#> 10:           B           all    200         2       3,4,5,6,7,8,...[150]
#> 11:           B           all    200         2       3,4,5,6,7,8,...[150]
#> 12:           B           all    200         2       3,4,5,6,7,8,...[150]
#> 13:           A           all    200         3   9,10,25,26,33,34,...[50]
#> 14:           A           all    200         3   9,10,25,26,33,34,...[50]
#> 15:           A           all    200         3   9,10,25,26,33,34,...[50]
#> 16:           B           all    200         3 15,16,21,22,23,24,...[150]
#> 17:           B           all    200         3 15,16,21,22,23,24,...[150]
#> 18:           B           all    200         3 15,16,21,22,23,24,...[150]
#> 19:           A         other    150         1   1, 2,49,50,57,58,...[50]
#> 20:           A         other    150         1   1, 2,49,50,57,58,...[50]
#> 21:           B         other     50         1 19,20,31,32,37,38,...[150]
#> 22:           A         other    150         2  17,18,41,42,89,90,...[50]
#> 23:           A         other    150         2  17,18,41,42,89,90,...[50]
#> 24:           B         other     50         2       3,4,5,6,7,8,...[150]
#> 25:           A         other    150         3   9,10,25,26,33,34,...[50]
#> 26:           A         other    150         3   9,10,25,26,33,34,...[50]
#> 27:           B         other     50         3 15,16,21,22,23,24,...[150]
#> 28:           A          same     50         1   1, 2,49,50,57,58,...[50]
#> 29:           B          same    150         1 19,20,31,32,37,38,...[150]
#> 30:           B          same    150         1 19,20,31,32,37,38,...[150]
#> 31:           A          same     50         2  17,18,41,42,89,90,...[50]
#> 32:           B          same    150         2       3,4,5,6,7,8,...[150]
#> 33:           B          same    150         2       3,4,5,6,7,8,...[150]
#> 34:           A          same     50         3   9,10,25,26,33,34,...[50]
#> 35:           B          same    150         3 15,16,21,22,23,24,...[150]
#> 36:           B          same    150         3 15,16,21,22,23,24,...[150]
#>     test.subset train.subsets groups test.fold                       test
#>          <char>        <char>  <int>     <int>                     <list>
#>                          train  seed n.train.groups iteration Train_subsets
#>                         <list> <int>          <int>     <int>        <fctr>
#>  1:   5, 6, 9,10,15,16,...[98]     1             50         1           all
#>  2:       3,4,5,6,7,8,...[298]     1            150         2           all
#>  3:       3,4,5,6,7,8,...[400]     1            200         3           all
#>  4:   3, 4, 7, 8,15,16,...[98]     1             50         4           all
#>  5:       3,4,5,6,7,8,...[298]     1            150         5           all
#>  6:       3,4,5,6,7,8,...[400]     1            200         6           all
#>  7:   1, 2,35,36,39,40,...[98]     1             50         7           all
#>  8:  1, 2, 9,10,19,20,...[298]     1            150         8           all
#>  9:  1, 2, 9,10,15,16,...[400]     1            200         9           all
#> 10:  19,20,63,64,73,74,...[98]     1             50        10           all
#> 11:  1, 2, 9,10,15,16,...[298]     1            150        11           all
#> 12:  1, 2, 9,10,15,16,...[400]     1            200        12           all
#> 13:  29,30,37,38,49,50,...[98]     1             50        13           all
#> 14:  5, 6,11,12,13,14,...[298]     1            150        14           all
#> 15:       1,2,3,4,5,6,...[400]     1            200        15           all
#> 16:  13,14,29,30,49,50,...[98]     1             50        16           all
#> 17:       1,2,3,4,5,6,...[298]     1            150        17           all
#> 18:       1,2,3,4,5,6,...[400]     1            200        18           all
#> 19: 15,16,21,22,55,56,...[100]     1             50        19         other
#> 20:       3,4,5,6,7,8,...[300]     1            150        20         other
#> 21:  9,10,17,18,25,26,...[100]     1             50        21         other
#> 22: 15,16,19,20,23,24,...[100]     1             50        22         other
#> 23: 15,16,19,20,21,22,...[300]     1            150        23         other
#> 24:  1, 2, 9,10,25,26,...[100]     1             50        24         other
#> 25: 11,12,19,20,27,28,...[100]     1             50        25         other
#> 26:       3,4,5,6,7,8,...[300]     1            150        26         other
#> 27:  1, 2,17,18,41,42,...[100]     1             50        27         other
#> 28:  9,10,17,18,25,26,...[100]     1             50        28          same
#> 29: 59,60,63,64,75,76,...[100]     1             50        29          same
#> 30:       3,4,5,6,7,8,...[300]     1            150        30          same
#> 31:  1, 2, 9,10,25,26,...[100]     1             50        31          same
#> 32: 23,24,37,38,51,52,...[100]     1             50        32          same
#> 33: 15,16,19,20,21,22,...[300]     1            150        33          same
#> 34:  1, 2,17,18,41,42,...[100]     1             50        34          same
#> 35: 11,12,19,20,45,46,...[100]     1             50        35          same
#> 36:       3,4,5,6,7,8,...[300]     1            150        36          same
#>                          train  seed n.train.groups iteration Train_subsets
#>                         <list> <int>          <int>     <int>        <fctr>
(reg.learner.list <- list(
  mlr3::LearnerRegrFeatureless$new()))
#> [[1]]
#> 
#> ── <LearnerRegrFeatureless> (regr.featureless): Featureless Regression Learner ─
#> • Model: -
#> • Parameters: robust=FALSE
#> • Packages: mlr3 and stats
#> • Predict Types: [response], se, and quantiles
#> • Feature Types: logical, integer, numeric, character, factor, ordered,
#> POSIXct, and Date
#> • Encapsulation: none (fallback: -)
#> • Properties: featureless, importance, missings, selected_features, and weights
#> • Other settings: use_weights = 'use'
if(requireNamespace("rpart")){
  reg.learner.list$rpart <- mlr3::LearnerRegrRpart$new()
}
(soak_sizes_grid <- mlr3::benchmark_grid(
  reg.task,
  reg.learner.list,
  soak_sizes))
#>      task          learner          resampling
#>    <char>           <char>              <char>
#> 1:    sin regr.featureless same_other_sizes_cv
#> 2:    sin       regr.rpart same_other_sizes_cv
##if(require(future))plan("multisession")
lgr::get_logger("mlr3")$set_threshold("warn")
(soak_sizes_result <- mlr3::benchmark(
  soak_sizes_grid, store_models = TRUE))
#> 
#> ── <BenchmarkResult> of 72 rows with 2 resampling run ──────────────────────────
#>  nr task_id       learner_id       resampling_id iters warnings errors
#>   1     sin regr.featureless same_other_sizes_cv    36        0      0
#>   2     sin       regr.rpart same_other_sizes_cv    36        0      0
soak_sizes_score <- mlr3resampling::score(
  soak_sizes_result, mlr3::msr("regr.rmse"))
soak_sizes_score[1]
#>    test.subset train.subsets groups test.fold                      test
#>         <char>        <char>  <int>     <int>                    <list>
#> 1:           A           all    200         1  1, 2,49,50,57,58,...[50]
#>                        train  seed n.train.groups iteration Train_subsets
#>                       <list> <int>          <int>     <int>        <fctr>
#> 1:  5, 6, 9,10,15,16,...[98]     1             50         1           all
#>                                   uhash    nr           task task_id
#>                                  <char> <int>         <list>  <char>
#> 1: 246151e7-104c-4012-b12a-ca83bf1f6c59     1 <TaskRegr:sin>     sin
#>                                      learner       learner_id
#>                                       <list>           <char>
#> 1: <LearnerRegrFeatureless:regr.featureless> regr.featureless
#>                      resampling       resampling_id  prediction_test regr.rmse
#>                          <list>              <char>           <list>     <num>
#> 1: <ResamplingSameOtherSizesCV> same_other_sizes_cv <PredictionRegr> 0.7625015
#>      algorithm
#>         <char>
#> 1: featureless

The plot below shows the same results (no down-sampling) as we would get with sizes=-1 (as in the previous section).

if(require(ggplot2)){
ggplot()+
  geom_point(aes(
    regr.rmse, train.subsets, color=algorithm),
    shape=1,
    data=soak_sizes_score[groups==n.train.groups])+
  facet_grid(. ~ test.subset, labeller=label_both)
}

plot of chunk unnamed-chunk-19

The plots below compare all six train subsets (including three down-sampled ones), and it is clear that there is an effect of sample size.

soak_sizes_score[, subset.N := paste(train.subsets, n.train.groups)]
(levs <- soak_sizes_score[order(train.subsets, n.train.groups), unique(subset.N)])
#> [1] "all 50"    "all 150"   "all 200"   "other 50"  "other 150" "same 50"  
#> [7] "same 150"
soak_sizes_score[, subset.N.fac := factor(subset.N, levs)]
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      regr.rmse, subset.N.fac, color=algorithm),
      shape=1,
      data=soak_sizes_score)+
    facet_wrap("test.subset", labeller=label_both, scales="free", nrow=1)
}

plot of chunk unnamed-chunk-20

(levs <- soak_sizes_score[order(n.train.groups, train.subsets), unique(subset.N)])
#> [1] "all 50"    "other 50"  "same 50"   "all 150"   "other 150" "same 150" 
#> [7] "all 200"
soak_sizes_score[, N.subset.fac := factor(subset.N, levs)]
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      regr.rmse, N.subset.fac, color=algorithm),
      shape=1,
      data=soak_sizes_score)+
    facet_wrap("test.subset", labeller=label_both, scales="free", nrow=1)
}

plot of chunk unnamed-chunk-20

Another way to view the effect of sample size is to plot the test/prediction error as a function of the number of train groups, as in the plots below.

if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      n.train.groups, regr.rmse,
      color=train.subsets),
      shape=1,
      data=soak_sizes_score)+
    geom_line(aes(
      n.train.groups, regr.rmse,
      group=paste(train.subsets, seed, algorithm),
      linetype=algorithm,
      color=train.subsets),
      data=soak_sizes_score)+
    facet_grid(test.fold ~ test.subset, labeller=label_both)
}

plot of chunk unnamed-chunk-21

rpart.score <- soak_sizes_score[algorithm=="rpart" & train.subsets != "other"]
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      n.train.groups, regr.rmse,
      color=train.subsets),
      shape=1,
      data=rpart.score)+
    geom_line(aes(
      n.train.groups, regr.rmse,
      group=paste(train.subsets, seed, algorithm),
      color=train.subsets),
      data=rpart.score)+
    facet_grid(test.fold ~ test.subset, labeller=label_both)
}

plot of chunk unnamed-chunk-21

Use with auto_tuner on a task with stratification and grouping

In this section we show how ResamplingSameOtherSizesCV can be used on a task with stratification and grouping, for hyper-parameter tuning. First we recall the previously defined task and evaluation CV.

str(reg.task$col_roles)
#> List of 10
#>  $ feature        : chr "x"
#>  $ target         : chr "y"
#>  $ name           : chr(0) 
#>  $ order          : chr(0) 
#>  $ stratum        : chr "random_subset"
#>  $ group          : chr "agroup"
#>  $ offset         : chr(0) 
#>  $ weights_learner: chr(0) 
#>  $ weights_measure: chr(0) 
#>  $ subset         : chr "random_subset"

We see in the output above that the task has column roles for both stratum and group, which normally causes an error when used with ResamplingCV:

mlr3::ResamplingCV$new()$instantiate(reg.task)
#> Error: Cannot combine stratification with grouping

Below we show how ResamplingSameOtherSizesCV can be used instead:

ignore.cv <- mlr3resampling::ResamplingSameOtherSizesCV$new()
ignore.cv$param_set$values$ignore_subset <- TRUE
ignore.cv$instantiate(reg.task)
ignore.cv$instance$iteration.dt
#>    test.subset train.subsets groups test.fold                       test
#>         <char>        <char>  <int>     <int>                     <list>
#> 1:        full          same    200         1  5, 6, 7, 8, 9,10,...[200]
#> 2:        full          same    200         2  3, 4,11,12,13,14,...[200]
#> 3:        full          same    200         3  1, 2,25,26,31,32,...[200]
#>                         train  seed n.train.groups iteration Train_subsets
#>                        <list> <int>          <int>     <int>        <fctr>
#> 1:  1, 2, 3, 4,11,12,...[400]     1            200         1          same
#> 2:       1,2,5,6,7,8,...[400]     1            200         2          same
#> 3:       3,4,5,6,7,8,...[400]     1            200         3          same

To use the above CV object with a learning algorithm in a benchmark experiment, we need to use it as the resampling argument to auto_tuner, as in the code below,

do_benchmark <- function(subtrain.valid.cv){
  reg.learner.list <- list(
    mlr3::LearnerRegrFeatureless$new())
  if(requireNamespace("rpart")){
    reg.learner.list$rpart <- mlr3::LearnerRegrRpart$new()
    if(requireNamespace("mlr3tuning")){
      rpart.learner <- mlr3::LearnerRegrRpart$new()
      ##mlr3tuningspaces::lts(rpart.learner)$param_set$values
      rpart.learner$param_set$values$cp <- paradox::to_tune(1e-4, 0.1, log=TRUE)
      reg.learner.list$rpart.tuned <- mlr3tuning::auto_tuner(
        tuner = mlr3tuning::tnr("grid_search"), #mlr3tuning::TunerBatchGridSearch$new()
        learner = rpart.learner,
        resampling = subtrain.valid.cv,
        measure = mlr3::msr("regr.rmse"))
    }
  }
  soak_sizes_grid <- mlr3::benchmark_grid(
    reg.task,
    reg.learner.list,
    soak_sizes)
  lgr::get_logger("bbotk")$set_threshold("warn")
  soak_sizes_result <- mlr3::benchmark(
    soak_sizes_grid, store_models = TRUE)
}
do_benchmark(mlr3::ResamplingCV$new())
#> Loading required namespace: mlr3tuning
#> Warning: Caught Mlr3Error. Canceling all iterations ...
#> Error: Cannot combine stratification with grouping

The error above occurs because ResamplingCV does not support simultaneous stratification and grouping. To fix that, we can use the code below:

ignore.cv <- mlr3resampling::ResamplingSameOtherSizesCV$new()
ignore.cv$param_set$values$ignore_subset <- TRUE
(same.other.result <- do_benchmark(ignore.cv))
#> 
#> ── <BenchmarkResult> of 108 rows with 3 resampling run ─────────────────────────
#>  nr task_id       learner_id       resampling_id iters warnings errors
#>   1     sin regr.featureless same_other_sizes_cv    36        0      0
#>   2     sin       regr.rpart same_other_sizes_cv    36        0      0
#>   3     sin regr.rpart.tuned same_other_sizes_cv    36        0      0

The output above shows that the benchmark worked. The code below plots the results.

same.other.score <- mlr3resampling::score(
  same.other.result, mlr3::msr("regr.rmse"))
same.other.score[1]
#>    test.subset train.subsets groups test.fold                      test
#>         <char>        <char>  <int>     <int>                    <list>
#> 1:           A           all    200         1  1, 2,49,50,57,58,...[50]
#>                        train  seed n.train.groups iteration Train_subsets
#>                       <list> <int>          <int>     <int>        <fctr>
#> 1:  5, 6, 9,10,15,16,...[98]     1             50         1           all
#>                                   uhash    nr           task task_id
#>                                  <char> <int>         <list>  <char>
#> 1: cf567425-820e-438e-946b-7e70e8d0e3d2     1 <TaskRegr:sin>     sin
#>                                      learner       learner_id
#>                                       <list>           <char>
#> 1: <LearnerRegrFeatureless:regr.featureless> regr.featureless
#>                      resampling       resampling_id  prediction_test regr.rmse
#>                          <list>              <char>           <list>     <num>
#> 1: <ResamplingSameOtherSizesCV> same_other_sizes_cv <PredictionRegr> 0.7625015
#>      algorithm
#>         <char>
#> 1: featureless
same.other.wide <- dcast(
  same.other.score,
  algorithm + test.subset + train.subsets ~ .,
  list(mean, sd),
  value.var="regr.rmse")
if(require(ggplot2)){
  ggplot()+
    geom_segment(aes(
      regr.rmse_mean+regr.rmse_sd, train.subsets,
      xend=regr.rmse_mean-regr.rmse_sd, yend=train.subsets),
      data=same.other.wide)+
    geom_point(aes(
      regr.rmse_mean, train.subsets),
      shape=1,
      data=same.other.wide)+
    facet_grid(algorithm ~ test.subset, labeller=label_both)
}

plot of chunk unnamed-chunk-28

The plot above has a row of panels for each algorithm, so we can compare rpart (without tuning) and rpart.tuned (rpart with tuning of cp).

Conclusions

mlr3resampling::ResamplingSameOtherSizesCV can be used for model evaluation (train/test split): it splits at the level of groups, respects stratification, and creates one train/test split for each combination of test subset, train subsets (same/other/all), train set size, and test fold.

It can also be used for model training (subtrain/validation split): with ignore_subset=TRUE it supports tasks that combine stratification and grouping, for example as the resampling argument of mlr3tuning::auto_tuner, as in the sketch below.
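
A minimal sketch of both uses, assuming a task with subset, group, and stratum roles as defined above:

## Model evaluation: compare train subsets (same/other/all), optionally
## with down-sampling (sizes=0) to see the effect of train set size.
eval.cv <- mlr3resampling::ResamplingSameOtherSizesCV$new()
eval.cv$param_set$values$sizes <- 0
## Model training: ignore the subset role, so this CV can serve as the
## subtrain/validation resampling inside mlr3tuning::auto_tuner, even for
## tasks that combine stratification and grouping.
tune.cv <- mlr3resampling::ResamplingSameOtherSizesCV$new()
tune.cv$param_set$values$ignore_subset <- TRUE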

Arizona trees data

The goal of this section is to explain the differences between various column roles: group (rows that must stay together when splitting), stratum (values that should be distributed evenly across splits), and subset (values that define the same/other/all train subsets).

What is a group?

Below we load the data set.

data(AZtrees,package="mlr3resampling")
library(data.table)
AZdt <- data.table(AZtrees)
AZdt[1]
#>       xcoord   ycoord region3 region4 polygon        y SAMPLE_1 SAMPLE_2
#>        <num>    <num>  <char>  <char>  <fctr>   <fctr>    <int>    <int>
#> 1: -111.6643 35.23736      NE      NE       1 Not tree     3331     3919
#>    SAMPLE_3 SAMPLE_4 SAMPLE_5 SAMPLE_6 SAMPLE_7 SAMPLE_8 SAMPLE_9 SAMPLE_10
#>       <int>    <int>    <int>    <int>    <int>    <int>    <int>     <int>
#> 1:     3957     4514     4700     4607     4420     4494     4139      3906
#>    SAMPLE_11 SAMPLE_12 SAMPLE_13 SAMPLE_14 SAMPLE_15 SAMPLE_16 SAMPLE_17
#>        <int>     <int>     <int>     <int>     <int>     <int>     <int>
#> 1:        14       -40       -71       125        21        25        10
#>    SAMPLE_18 SAMPLE_19 SAMPLE_20 SAMPLE_21
#>        <int>     <int>     <int>     <int>
#> 1:      -263      -324      -362       370

Above we see one row of data. Below we see a scatterplot of the data:

x.center <- -111.72
y.center <- 35.272
rect.size <- 0.01/2
x.min.max <- x.center+c(-1, 1)*rect.size
y.min.max <- y.center+c(-1, 1)*rect.size
rect.dt <- data.table(
  xmin=x.min.max[1], xmax=x.min.max[2],
  ymin=y.min.max[1], ymax=y.min.max[2])
if(require(ggplot2)){
  tree.fill.scale <- scale_fill_manual(
    values=c(Tree="black", "Not tree"="white"))
  ggplot()+
    tree.fill.scale+
    geom_rect(aes(
      xmin=xmin, xmax=xmax, ymin=ymin,ymax=ymax),
      data=rect.dt,
      fill="red",
      linewidth=3,
      color="red")+
    geom_point(aes(
      xcoord, ycoord, fill=y),
      shape=21,
      data=AZdt)+
    coord_equal()
}

plot of chunk unnamed-chunk-30

Note the red square in the plot above. Below we zoom into that square.

if(require(ggplot2)){
  gg <- ggplot()+
    tree.fill.scale+
    geom_point(aes(
      xcoord, ycoord, fill=y),
      shape=21,
      data=AZdt)+
    coord_equal()+
    scale_x_continuous(
      limits=x.min.max)+
    scale_y_continuous(
      limits=y.min.max)
  if(require(directlabels)){
    gg <- gg+geom_dl(aes(
      xcoord, ycoord, label=paste("polygon",polygon)),
      data=AZdt,
      method=list(cex=2, "smart.grid"))
  }
  gg
}
#> Loading required package: directlabels
#> Warning: Removed 5927 rows containing missing values or values outside the scale range
#> (`geom_point()`).
#> Warning: Removed 5927 rows containing missing values or values outside the scale range
#> (`geom_dl()`).

plot of chunk unnamed-chunk-31

In the plot above, we see that there are several groups of points, each with a black number. Each group of points comes from a single polygon (label drawn in GIS software), and the black number is the polygon ID number. So each polygon represents one label, either tree or not, and there are one or more points/pixels with that label inside each polygon.

A polygon is an example of a group. Each polygon results in one or more rows of training data (pixels), but since pixels in a given group were all labeled together, we would like to keep them together when splitting the data.
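
We can check this structure directly (a quick sketch): each polygon should have exactly one label, and one or more pixels.

## One label per polygon, one or more pixels (rows) per polygon.
poly.dt <- AZdt[, .(pixels=.N, labels=length(unique(y))), by=polygon]
poly.dt[labels != 1] # should be empty: one label per polygon
range(poly.dt$pixels) # polygons contain one or more pixels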

What is a subset?

Below we plot the same data, but this time colored by region.

##dput(RColorBrewer::brewer.pal(3,"Dark2"))
region.colors <- c(NW="#1B9E77", NE="#D95F02", S="#7570B3")
if(require(ggplot2)){
  ggplot()+
    tree.fill.scale+
    scale_color_manual(
      values=region.colors)+
    geom_point(aes(
      xcoord, ycoord, color=region3, fill=y),
      shape=21,
      data=AZdt)+
    coord_equal()
}

plot of chunk unnamed-chunk-32

We can see in the plot above that there are three values in the region3 column: NE, NW, and S (different geographical regions on the map which are well-separated). We would like to know if it is possible to train on one region, and then accurately predict on another region.

Cross-validation

First we create a task:

ctask <- mlr3::TaskClassif$new(
  "AZtrees", AZdt, target="y")
ctask$col_roles$subset <- "region3"
ctask$col_roles$group <- "polygon"
ctask$col_roles$stratum <- "y"
ctask$col_roles$feature <- grep("SAMPLE",names(AZdt),value=TRUE)
str(ctask$col_roles)
#> List of 10
#>  $ feature        : chr [1:21] "SAMPLE_1" "SAMPLE_2" "SAMPLE_3" "SAMPLE_4" ...
#>  $ target         : chr "y"
#>  $ name           : chr(0) 
#>  $ order          : chr(0) 
#>  $ stratum        : chr "y"
#>  $ group          : chr "polygon"
#>  $ offset         : chr(0) 
#>  $ weights_learner: chr(0) 
#>  $ weights_measure: chr(0) 
#>  $ subset         : chr "region3"

Then we can instantiate the CV to see how it works (normally you do not need to instantiate it yourself; mlr3::benchmark does that for you).

same.other.cv <- mlr3resampling::ResamplingSameOtherSizesCV$new()
same.other.cv$param_set$values$folds <- 3
same.other.cv$instantiate(ctask)
same.other.cv$instance$iteration.dt[, .(
  train.subsets, test.fold, test.subset, n.train.groups,
  train.rows=sapply(train, length))]
#>     train.subsets test.fold test.subset n.train.groups train.rows
#>            <char>     <int>      <char>          <int>      <int>
#>  1:           all         1          NE            125       3108
#>  2:           all         1          NW            125       3108
#>  3:           all         1           S            125       3108
#>  4:           all         2          NE            125       4325
#>  5:           all         2          NW            125       4325
#>  6:           all         2           S            125       4325
#>  7:           all         3          NE            125       4479
#>  8:           all         3          NW            125       4479
#>  9:           all         3           S            125       4479
#> 10:         other         1          NE             55       1934
#> 11:         other         1          NW            104       1652
#> 12:         other         1           S             91       2630
#> 13:         other         2          NE             55       3550
#> 14:         other         2          NW            104       3524
#> 15:         other         2           S             91       1576
#> 16:         other         3          NE             55       3500
#> 17:         other         3          NW            104       3610
#> 18:         other         3           S             91       1848
#> 19:          same         1          NE             70       1174
#> 20:          same         1          NW             21       1456
#> 21:          same         1           S             34        478
#> 22:          same         2          NE             70        775
#> 23:          same         2          NW             21        801
#> 24:          same         2           S             34       2749
#> 25:          same         3          NE             70        979
#> 26:          same         3          NW             21        869
#> 27:          same         3           S             34       2631
#>     train.subsets test.fold test.subset n.train.groups train.rows
#>            <char>     <int>      <char>          <int>      <int>

The table above has one row per train/test split for which error/accuracy metrics will be computed. The n.train.groups column is the number of polygons used in the train set, which is defined as the intersection of the train subsets and the train folds. To double-check, below we compute the total number of groups/polygons per subset/region, and the expected number of train groups/polygons.

AZdt[, .(
  polygons=length(unique(polygon))
), by=region3][
, train.polygons := polygons*with(same.other.cv$param_set$values, (folds-1)/folds)
][]
#>    region3 polygons train.polygons
#>     <char>    <int>          <num>
#> 1:      NE      105       70.00000
#> 2:      NW       32       21.33333
#> 3:       S       52       34.66667

The counts in the train.polygons column above match, up to rounding (a polygon cannot be split across folds), the n.train.groups column in the previous table. To determine the number of rows of train data, we can look at the train.rows column in the previous table. A further row-level check appears below.
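
As a sketch using the objects defined above (and assuming task row ids correspond to rows of AZdt, the default for a task created from a data table), we can verify that an "other" split uses no training pixels from its test subset:

## Take the first "other" split with test subset NE, then tabulate the
## regions of its train rows: NE should be absent from the table.
other.NE <- same.other.cv$instance$iteration.dt[
  train.subsets=="other" & test.subset=="NE"][1]
table(AZdt[other.NE$train[[1]], region3])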

Benchmark and test error computation

Below we define the benchmark experiment.

same.other.cv <- mlr3resampling::ResamplingSameOtherSizesCV$new()
(learner.list <- list(
  mlr3::LearnerClassifFeatureless$new()))
#> [[1]]
#> 
#> ── <LearnerClassifFeatureless> (classif.featureless): Featureless Classification
#> • Model: -
#> • Parameters: method=mode
#> • Packages: mlr3
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, character, factor, ordered,
#> POSIXct, and Date
#> • Encapsulation: none (fallback: -)
#> • Properties: featureless, importance, missings, multiclass, selected_features,
#> twoclass, and weights
#> • Other settings: use_weights = 'use'
if(requireNamespace("rpart")){
  learner.list$rpart <- mlr3::LearnerClassifRpart$new()
}
for(learner.i in seq_along(learner.list)){
  learner.list[[learner.i]]$predict_type <- "prob"
}
set.seed(1)
(bench.grid <- mlr3::benchmark_grid(ctask, learner.list, same.other.cv))
#>       task             learner          resampling
#>     <char>              <char>              <char>
#> 1: AZtrees classif.featureless same_other_sizes_cv
#> 2: AZtrees       classif.rpart same_other_sizes_cv

Above we see one row per combination of task, learner, and resampling. Below we compute the benchmark result and test accuracy.

bench.result <- mlr3::benchmark(bench.grid)
measure.list <- mlr3::msrs(c("classif.acc","classif.auc"))
score.dt <- mlr3resampling::score(bench.result, measure.list)
score.dt[1]
#>    test.subset train.subsets groups test.fold                       test
#>         <char>        <char>  <int>     <int>                     <list>
#> 1:          NE           all    125         1  9,10,11,12,13,14,...[352]
#>                    train  seed n.train.groups iteration Train_subsets
#>                   <list> <int>          <int>     <int>        <fctr>
#> 1: 1,2,3,4,5,6,...[4462]     1            125         1           all
#>                                   uhash    nr                  task task_id
#>                                  <char> <int>                <list>  <char>
#> 1: de0d84c2-3f6f-4e75-9b45-583693d42438     1 <TaskClassif:AZtrees> AZtrees
#>                                            learner          learner_id
#>                                             <list>              <char>
#> 1: <LearnerClassifFeatureless:classif.featureless> classif.featureless
#>                      resampling       resampling_id     prediction_test
#>                          <list>              <char>              <list>
#> 1: <ResamplingSameOtherSizesCV> same_other_sizes_cv <PredictionClassif>
#>    classif.acc classif.auc   algorithm
#>          <num>       <num>      <char>
#> 1:   0.7840909         0.5 featureless

Above we see one row of the result, for one train/test split. Below we plot the accuracy results using two different methods.

score.long <- melt(
  score.dt,
  measure.vars=measure(variable, pattern="classif.(acc|auc)"))
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      value, train.subsets, color=algorithm),
      data=score.long)+
    facet_grid(test.subset ~ variable, labeller=label_both, scales="free")
}

plot of chunk unnamed-chunk-38

Above, each dot represents one train/test split. Another way to create this plot is via the plot method, as below.

plot(score.dt)+my_theme

plot of chunk unnamed-chunk-39

Below we take the mean/SD over folds.

score.wide <- dcast(
  score.long,
  algorithm + test.subset + train.subsets + variable ~ .,
  list(mean, sd),
  value.var="value")
if(require(ggplot2)){
  ggplot()+
    geom_point(aes(
      value_mean, train.subsets, color=algorithm),
      size=3,
      fill="white",
      shape=21,
      data=score.wide)+
    geom_segment(aes(
      value_mean+value_sd, train.subsets,
      color=algorithm,
      linewidth=algorithm,
      xend=value_mean-value_sd, yend=train.subsets),
      data=score.wide)+
    scale_linewidth_manual(values=c(featureless=2, rpart=1))+
    facet_grid(test.subset ~ variable, labeller=label_both, scales="free")+
    scale_x_continuous(
      "Mean +/- SD of test accuracy/AUC over folds/splits")
}

plot of chunk unnamed-chunk-40

The plot above shows, for each combination of test subset and measure, the mean and SD of test accuracy/AUC as a function of the train subsets used (same/other/all), which makes it easy to see whether training on other regions helps or hurts. The table below shows the same means numerically.
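
To inspect these means numerically, we can print the table computed above, one row per algorithm/subset/measure combination (sketch; output omitted):

## Sort for easier reading: one block per measure and test subset.
score.wide[order(variable, test.subset, train.subsets)]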

Another way to visualize these patterns is via the plot method for pvalue objects, as below.

AZ_pval <- mlr3resampling::pvalue(score.dt, digits=3)
plot(AZ_pval)+my_theme

plot of chunk unnamed-chunk-41

The figure above shows P-values for classification accuracy (by default, the first measure is used). To compute P-values for AUC instead, we can use the code below:

AZ_pval_AUC <- mlr3resampling::pvalue(score.dt, "classif.auc", digits=3)
plot(AZ_pval_AUC)+my_theme

plot of chunk unnamed-chunk-42

Conclusion

The column roles group, stratum, and subset may be used together in the same task, in order to perform a cross-validation experiment which captures the structure in the data. The three role assignments are recapped below.
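
For reference, the structure of this experiment came entirely from the three role assignments used earlier in this vignette:

## Recap of the column roles assigned to the task above.
ctask$col_roles$group <- "polygon"    # keep pixels of a polygon together
ctask$col_roles$stratum <- "y"        # preserve label proportions in folds
ctask$col_roles$subset <- "region3"   # compare same/other/all train subsets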

Session info

sessionInfo()
#> R version 4.5.2 (2025-10-31 ucrt)
#> Platform: x86_64-w64-mingw32/x64
#> Running under: Windows 11 x64 (build 26100)
#> 
#> Matrix products: default
#>   LAPACK version 3.12.1
#> 
#> locale:
#> [1] LC_COLLATE=C                          
#> [2] LC_CTYPE=English_United States.utf8   
#> [3] LC_MONETARY=English_United States.utf8
#> [4] LC_NUMERIC=C                          
#> [5] LC_TIME=English_United States.utf8    
#> 
#> time zone: America/Toronto
#> tzcode source: internal
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] directlabels_2025.6.24    mlr3resampling_2025.11.19
#> [3] mlr3_1.2.0                future_1.68.0            
#> [5] ggplot2_4.0.1             data.table_1.17.99       
#> 
#> loaded via a namespace (and not attached):
#>  [1] future.apply_1.20.0  gtable_0.3.6         crayon_1.5.3        
#>  [4] dplyr_1.1.4          compiler_4.5.2       rpart_4.1.24        
#>  [7] tidyselect_1.2.1     parallel_4.5.2       globals_0.18.0      
#> [10] scales_1.4.0         uuid_1.2-1           R6_2.6.1            
#> [13] mlr3tuning_1.5.0     labeling_0.4.3       generics_0.1.4      
#> [16] knitr_1.50           palmerpenguins_0.1.1 backports_1.5.0     
#> [19] checkmate_2.3.3      tibble_3.3.0         paradox_1.0.1       
#> [22] pillar_1.11.1        RColorBrewer_1.1-3   mlr3measures_1.1.0  
#> [25] rlang_1.1.6          lgr_0.5.0            xfun_0.54           
#> [28] quadprog_1.5-8       mlr3misc_0.19.0      S7_0.2.1            
#> [31] cli_3.6.5            withr_3.0.2          magrittr_2.0.4      
#> [34] digest_0.6.38        grid_4.5.2           bbotk_1.8.0         
#> [37] lifecycle_1.0.4      vctrs_0.6.5          evaluate_1.0.5      
#> [40] glue_1.8.0           farver_2.1.2         listenv_0.10.0      
#> [43] codetools_0.2-20     parallelly_1.45.1    tools_4.5.2         
#> [46] pkgconfig_2.0.3