K-fold and Montecarlo cross-validation vs Bootstrap: a primer

Cross-Validation (CV) is a standard procedure to quantify the robustness of a regression model. The basic idea of CV is to test the predictive ability of a model on a set of data that has not been used to build that model. K-fold cross-validation is probably the most popular amongst the CV strategies; however, other choices exist. In this tutorial we are going to look at three different strategies, namely K-fold CV, Montecarlo CV and Bootstrap. We will outline the differences between these methods and apply them to real data.

Before delving into the details, a short note on terminology. While K-Fold is a standard term for the associated algorithm, both Montecarlo and Bootstrap are often found under different names. Montecarlo CV is also called Shuffle Split or Random Permutation CV. Bootstrap (or bootstrapping) is the resampling idea behind the ensemble technique known as Bootstrap Aggregating, usually shortened to Bagging. Regardless of the terminology, we are going to define exactly what we mean by each method, so as to avoid confusion when comparing different sources. I should also say that Bootstrap is not, strictly speaking, a cross-validation method (we will clarify this later), but we are going to use it in a similar fashion to the other two methods.

The data used in this article are taken from the paper Near‐infrared spectroscopy (NIRS) predicts non‐structural carbohydrate concentrations in different tissue types of a broad range of tree species, by J. A. Ramirez et al. The associated dataset is freely available for download here.

K-fold, Montecarlo and Bootstrap

To quantify the robustness of a regression model with a single dataset (i.e. without an independent test set) requires splitting the available data into two or more subsets, so that model fitting and prediction can be performed on independent subsets. This strategy can be implemented in various ways, all aimed at avoiding overfitting. Overfitting refers to the situation in which the regression model predicts the training set with high accuracy, but does a terrible job at predicting new, independent data. In other words, the model is mostly built on features that are specific to the training set, which may bear little relation to new data. A cross-validation strategy avoids or mitigates this problem.

In this section we are going to define K-fold, Montecarlo and Bootstrap using Scikit-learn in Python and describe their action on a simple array. In the next section we are going to discuss the comparison between the methods.

Let’s begin with the imports (some of which will be used later).
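
At a minimum, something like this (plotting and data-loading imports would be added as needed):

```python
import numpy as np

from sklearn.model_selection import KFold, ShuffleSplit
from sklearn.utils import resample, check_random_state
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
```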

Let’s now define a simple array, of just 10 elements:
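
For instance:

```python
# A toy array of 10 elements, just to visualise how each method splits the data
ar = np.arange(10)
print(ar)
```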

We are going to apply the different methods on this simple array, which makes it easier to check the action of the different operations. Let’s start with the K-Fold.

The class KFold is defined by passing the number of splits ("folds") into which we are going to divide our array, the shuffle keyword that indicates whether we want a random permutation of the entries before splitting, and the (optional) random state for reproducibility.
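
A minimal sketch (the random_state value is an arbitrary choice):

```python
kf = KFold(n_splits=3, shuffle=True, random_state=42)

# Number of splitting iterations
print(kf.get_n_splits(ar))

# Train/test indices for each of the three folds
for train_index, test_index in kf.split(ar):
    print("Train:", train_index, "Test:", test_index)
```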

After returning the number of splitting iterations, the method kf.split(ar) produces what in Python is called a generator, that is, a function that can be used like an iterator in a loop. In this case the function yields the indices that split the original array into train and test sets. Since we have defined three splits, this procedure is repeated three times, with (approximately) a third of the data reserved for the test set. If you run the previous snippet, you will see the train and test indices printed for each of the three splits.

The main feature of the K-Fold approach is that each data point appears exactly once in the test set, with no repetition.

***

Now let’s do the same thing for a Montecarlo CV. In scikit-learn this function is implemented through the ShuffleSplit class as follows:
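
A sketch along the same lines as the K-Fold snippet (the variable name mc and the random_state are arbitrary choices):

```python
mc = ShuffleSplit(n_splits=3, test_size=len(ar) // 3, random_state=42)

# Train/test indices for each of the three random splits
for train_index, test_index in mc.split(ar):
    print("Train:", train_index, "Test:", test_index)
```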

Note that in this case the parameter n_splits only defines the number of times we are going to repeat the process. In order to have a test set comparable in size with the K-Fold case, we need to explicitly pass the parameter test_size, which we set to a third of the size of the original data. Running the code prints the train and test indices for each split.

This time there are repetitions across the test sets: looking over the different splits, some elements appear in a test set more than once, and some not at all. This is the main difference between Montecarlo and K-Fold. Unlike K-Fold, each Montecarlo split is independent of any other: the original data is randomly shuffled and then split differently each time.

***

Implementing the Bootstrap method is a wee bit more involved, as unfortunately it doesn't come out of the box in scikit-learn. There is, however, a utility function that gets us going. Let's first check how we can implement a single split using Bootstrap.

We are going to use the function “resample”, which implements resampling with replacement. In plain English, this means it constructs an array having the same size as the original one. Each element of the new array is randomly sampled from the original array.
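
A possible single-split sketch (variable names and random_state are arbitrary choices; the test-set line is explained just below):

```python
# Train set: sampled with replacement, same size as the original array
train = resample(ar, replace=True, n_samples=len(ar), random_state=42)

# Test set: all elements of the original array that never ended up in the train set
test = np.array([x for x in ar if x not in train])

print("Train:", train)
print("Test:", test)
```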

Running this snippet prints a train set (typically with repeated elements) and a test set made of the left-out samples.

The resample function only produces the train set. Unlike Montecarlo, each element of the train set is now drawn independently of the others, which means the same element can appear more than once. At the same time, the training set has the same size as the original array. In this sense Bootstrap is not, strictly speaking, a cross-validator: the train set may contain repeated samples. However, the way we defined the test set guarantees that there is no overlap between the two sets, i.e. each sample in the test set is guaranteed not to appear in the training set.

To define the test set we use a list comprehension: we select all elements that are not present in the training set and assign them to the test set. Note that this time the size of the test set can differ from one repetition to the next.

This is a single split using Bootstrap. In order to use this method in the same way as the others, we are going to define a class that implements the split method, so that we can use the same syntax irrespective of the method chosen.

Let’s first write down the code (to brush up on Python classes, take a look at this tutorial).
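
One way such a class could look, with the class and parameter names (BootstrapCV, n_bootstraps) being my own choices:

```python
class BootstrapCV:
    """Minimal Bootstrap 'cross-validator' exposing a split() generator."""

    def __init__(self, n_bootstraps=3, random_state=None):
        self.n_bootstraps = n_bootstraps
        self.random_state = random_state

    def split(self, X, y=None, groups=None):
        # Make sure we have a proper RandomState instance
        rng = check_random_state(self.random_state)
        # Indices from zero to the size of the input array
        idx = np.arange(len(X))
        for _ in range(self.n_bootstraps):
            # Train indices: sampled with replacement, same size as the input
            train_idx = resample(idx, replace=True, n_samples=len(idx), random_state=rng)
            # Test indices: those never drawn into the train set
            test_idx = np.array([i for i in idx if i not in train_idx])
            yield train_idx, test_idx
```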

In the __init__ function we pass the important arguments, namely the number of bootstraps (which replaces the number of splits) and the random state. From what we have seen before, there is no need to pass any argument specifying the training (or test) set size. In order to reproduce the syntax of the other cross-validators, we define a split function, which takes the input data as argument. The check_random_state utility (another scikit-learn function) makes sure the random state is properly instantiated. Next, we define an array of indices, going from zero to the size of the input array. At that point we use the code we saw before, producing a Bootstrap split of the indices.

Importantly, we want this function to produce a generator (not an output array) so that we can iterate over it just like the K-Fold and Montecarlo split methods. To do that we use the keyword yield instead of return.

Good, we can now implement Bootstrap as we have done with the other cross-validators.
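
For instance, mirroring the K-Fold and Montecarlo loops above (the instance name bs is arbitrary):

```python
bs = BootstrapCV(n_bootstraps=3, random_state=42)

for train_index, test_index in bs.split(ar):
    print("Train:", train_index, "Test:", test_index)
```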

Note that in this simple case, we avoided defining a function get_n_splits altogether.

Comparison between the methods using PLS regression on NIR data

OK, it’s finally time to get cracking with the data. We are going to compare the different methods applied to a PLS regression of NIR data.

The dataset can be downloaded at the location linked in the introduction to this post. The data needs some massaging before being usable with our functions. I'm going to skip the data-preparation code for now; you can find it at the end of this post. For now, assume that we have imported the scans into an array X and the labels into an array y, with the usual meaning of the variables.

The first step is to define a basic PLS function which fits the regressor on a training set and provides a prediction on a test set. Keep in mind that we need to specify the number of PLS components (latent variables). Here's the code.
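
Something along these lines (the argument names of base_pls are my own choices):

```python
def base_pls(X_train, y_train, X_test, n_components):
    # Fit a PLS regression with the requested number of latent variables
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    # Return the prediction on the test set
    return pls.predict(X_test)
```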

Now we can define the cross-validation function (a sketch is given right after the list below). The workflow of the function is:

  1. Define the cross-validation method and the number of splits
  2. Loop over the number of latent variables (PLS components)
  3. For each value of the PLS components run a CV procedure according to the parameters defined in 1.
  4. The CV procedure consists of running the base_pls function once per split and calculating the RMSE in cross-validation
  5. Find the number of PLS components that minimises the RMSE
  6. Run the base_pls function one more time with the optimised parameters
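
A sketch covering steps 1 to 5, assuming a function name pls_cv and a maximum of 20 components (both my own choices); the final run of step 6 follows the same pattern:

```python
def pls_cv(X, y, cv, max_components=20):
    """Find the number of PLS components that minimises the RMSE in cross-validation."""
    components = np.arange(1, max_components + 1)
    rmse_cv = []

    for n_comp in components:
        y_true, y_pred = [], []
        # Accumulate predictions over all CV splits for this number of components
        for train_idx, test_idx in cv.split(X):
            y_est = base_pls(X[train_idx], y[train_idx], X[test_idx], n_comp)
            y_true.extend(y[test_idx])
            y_pred.extend(np.ravel(y_est))
        rmse_cv.append(np.sqrt(mean_squared_error(y_true, y_pred)))

    # Number of components that minimises the cross-validated RMSE
    opt_comp = components[np.argmin(rmse_cv)]
    return opt_comp, np.min(rmse_cv)
```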

The final hurdle is to write down a function that iterates over the number of splits and calculates comparison metrics. Essentially we want to work out, for the data at hand, what is the optimal number of CV splits and how the different methods of cross-validation compare with one another.

Since we expect significant variations from one run to the next, we are going to repeat the CV procedure 5 times for each set of conditions and average the results. Here's the code we are going to run.
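
In sketch form, assuming a compare_cv function and a splits range from 2 to 10 (both my own choices; additional metrics such as R² could be tracked in the same loop):

```python
def compare_cv(X, y, method="kfold", splits_range=range(2, 11), n_repeats=5):
    """For each number of splits, repeat the CV optimisation n_repeats times and average."""
    avg_rmse = []
    for n_splits in splits_range:
        rmse_runs = []
        for rep in range(n_repeats):
            # A different random_state at each repetition yields different splits
            if method == "kfold":
                cv = KFold(n_splits=n_splits, shuffle=True, random_state=rep)
            elif method == "montecarlo":
                cv = ShuffleSplit(n_splits=n_splits, test_size=1.0 / n_splits, random_state=rep)
            else:  # "bootstrap"
                cv = BootstrapCV(n_bootstraps=n_splits, random_state=rep)
            _, rmse = pls_cv(X, y, cv=cv, max_components=20)
            rmse_runs.append(rmse)
        avg_rmse.append(np.mean(rmse_runs))
    return np.array(avg_rmse)
```

You would then call this once per method, e.g. compare_cv(X, y, method="montecarlo"), and plot the returned averages against the number of splits.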

We run the code above three times, once for each CV method, and plot the results. Here's what we get.

First up is K-Fold

Then Montecarlo

and finally Bootstrap

Comparison and wrap-up

K-Fold and Montecarlo give quite comparable performances. If anything, Montecarlo tends to be a bit more repeatable (lower variance), because we have enough samples to generate statistically different splits. Its bias, however, is generally higher than the corresponding value for K-Fold. This behaviour is well known in machine learning; in fact, it has a name: the bias-variance tradeoff.

Bootstrap doesn't seem to perform as well. In fact, Bootstrap's metrics don't really change as the number of splits increases, and they're worse than the corresponding values for K-Fold and Montecarlo. The likely reason for this is the repetition of spectra within any one Bootstrap training set. More precisely, the number of samples in this case is not large enough to guarantee a satisfactory model from the training set alone. Therefore all Bootstrap models tend to be less robust than those obtained with the other CV methods.

Anyway, we have covered a few ways to build a cross-validation procedure for your spectroscopy data. Truth be told, K-Fold CV is going to be pretty good in most cases, but by using the ideas in this post, you can build more versatile approaches for your cross-validation implementations.

Hope you enjoyed the post. If you found it useful, please share it with a friend or colleague and help us grow. Bye for now and thanks for reading!