Robust PCA (Principal Component Analysis) refers to an implementation of the PCA algorithm that is robust against outliers in the dataset.

We have discussed methods to detect and remove outliers in spectral data using the Mahalanobis distance or the PLS decomposition. Here we discuss an example of *neglecting* outliers, using a robust implementation of a known algorithm such as PCA.

In writing this post, I’ve been inspired by Chapter 3 of Varmuza & Filzmoser, Introduction to Multivariate Statistical Analysis in Chemometrics (2008).

## Back to basics: the PCA algorithm

If you check the Wikipedia page on PCA or Section 3.6 of Varmuza & Filzmoser (2008), you’ll learn that PCA decomposition of a dataset **X** can be calculated in two ways:

- By calculating the eigenvectors of the covariance matrix of **X**.
- By using an approach based on the Singular Value Decomposition (SVD).

Incidentally, the first approach is explained in detail in the very first post on this blog (see also our post on PCA and Kernel PCA). In contrast, the scikit-learn implementation of the PCA algorithm uses the singular value decomposition approach, which is faster and better behaved, especially for larger datasets. The presence of outliers, however, can produce deviations in the calculated components that cannot be easily eliminated.

The approach based on the covariance matrix can help with that, since one can implement a robust estimation of the covariance matrix which is less sensitive to outliers.
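As a quick sanity check of the equivalence of the two routes, here is a minimal sketch on synthetic data (not from the original post), using ordinary, non-robust estimates. The two first components should coincide up to an overall sign:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic data with unequal variances per feature
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) * np.array([3.0, 2.0, 1.5, 1.0, 0.5])
Xc = X - X.mean(axis=0)  # mean-centre

# Route 1: eigendecomposition of the covariance matrix
cov = np.cov(Xc, rowvar=False)
values, vectors = np.linalg.eigh(cov)
pc1_eig = vectors[:, np.argmax(values)]  # eigenvector of the largest eigenvalue

# Route 2: SVD-based PCA from scikit-learn
pc1_svd = PCA(n_components=1).fit(Xc).components_[0]

# The two first components coincide up to a sign
agreement = abs(np.dot(pc1_eig, pc1_svd))
print(agreement)  # ~1.0
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, which is why we pick the eigenvector with `np.argmax`.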

In the next section, we will describe an example of a robust PCA implementation using existing functions in scikit-learn in Python.

## Robust PCA using scikit-learn in Python

Let’s begin with the relevant imports

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.covariance import MinCovDet
```

and import the data from our GitHub repository. The data consists of NIR spectra of milk.

```python
# Read data
url = 'https://raw.githubusercontent.com/nevernervous78/nirpyresearch/master/data/milk.csv'
data = pd.read_csv(url)

# Assign spectra to the array X
X = data.values[:,2:].astype('float32')
```

Now we create a copy of the dataset and we modify a single spectrum (the first in the array) to turn it into an outlier

```python
# Generate a copy of X introducing a single outlier
X_copy = np.copy(X)
X_copy[0,:] = X_copy[0,:]**3 + 12*X_copy[0,:]
```

The idea here is that, if even a single spectrum in a dataset is significantly different from the rest, this can introduce measurable differences in the PCA decomposition.

To verify this statement, let’s build two identical pipelines, consisting of mean-centring and PCA decomposition. We fit the first pipeline to the original data, and the second pipeline to the data with one outlier

```python
pipe1 = Pipeline([('scaler', StandardScaler(with_std=False)), ('pca', PCA())])
pipe2 = Pipeline([('scaler', StandardScaler(with_std=False)), ('pca', PCA())])

pipe1.fit(X)
pipe2.fit(X_copy)
```

Let’s compare the first components of each decomposition

```python
plt.plot(pipe1["pca"].components_[0], 'b', label="Original PCA")
plt.plot(pipe2["pca"].components_[0], 'g', label="PCA with outlier")
plt.ylabel("First PC")
plt.xlabel("Wavelength index")
plt.legend()
plt.show()
```

The two components are markedly different. That single outlier is enough to distort the PCA calculation.

We can circumvent this issue by calculating the eigenvectors of the covariance matrix. The original calculation of the PCA with this approach is described in this post. Now we replace the conventional estimation of the covariance matrix with a robust estimator available in scikit-learn: MinCovDet, the Minimum Covariance Determinant estimator. Here’s the code

```python
# Robust estimation of the covariance matrix
robust_cov = MinCovDet().fit(StandardScaler(with_std=False).fit_transform(X_copy))
V = robust_cov.covariance_

# Calculate eigenvalues and eigenvectors of the covariance matrix
values, vectors = np.linalg.eig(V)

# Get the indices that sort the eigenvalues in descending order
sorted_values = np.argsort(values)[::-1]

# Sort the eigenvectors accordingly
sorted_vectors = vectors[:,sorted_values]
```

Let’s compare the components now.
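The comparison plot is produced along the same lines as before. One detail worth noting: eigenvectors are only defined up to a sign, so the robust component may need flipping before plotting. Here is a self-contained sketch on simulated "spectra" (the variable names mirror the post, but the data and the dominant direction `v` are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.covariance import MinCovDet

# Simulated spectra with one dominant direction, plus one outlier
rng = np.random.default_rng(0)
v = np.sin(np.linspace(0, np.pi, 20))
X = rng.normal(scale=5.0, size=(60, 1)) * v + rng.normal(scale=0.1, size=(60, 20))
X_copy = np.copy(X)
X_copy[0, :] = X_copy[0, :] ** 3 + 12 * X_copy[0, :]

# First component of the outlier-free PCA
pc1_clean = PCA().fit(X).components_[0]

# First component from the robust covariance of the contaminated data
V = MinCovDet(random_state=0).fit(X_copy).covariance_
values, vectors = np.linalg.eigh(V)
pc1_robust = vectors[:, np.argmax(values)]

# Align signs before plotting (eigenvectors are defined up to a sign)
if np.dot(pc1_clean, pc1_robust) < 0:
    pc1_robust = -pc1_robust

plt.plot(pc1_clean, 'b', label="Original PCA, no outliers")
plt.plot(pc1_robust, 'r', label="Robust PCA, with outlier")
plt.xlabel("Wavelength index")
plt.ylabel("First PC")
plt.legend()
plt.show()
```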

As anticipated, the first component calculated with the robust version of the PCA algorithm is essentially identical to the original first component, the one calculated without outliers. The robust estimation of the covariance matrix is unaffected by the outlier, so, in turn, the PCA decomposition is unaffected as well.

## A caveat, and a possible workaround

As you have probably noticed, in the code above we centred the data but avoided the actual scaling, i.e. we didn’t normalise the data by the standard deviation. The reason is that the estimation of the standard deviation itself is not robust: one large outlier can easily distort it. You can easily verify that, if you run the code above setting with_std=True, the result will no longer be correct.
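To see just how fragile the plain standard deviation is, a quick check on simulated numbers (illustrative, not from the post) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)

std_clean = x.std()

x_out = x.copy()
x_out[0] = 100.0  # a single large outlier

std_contaminated = x_out.std()
print(std_clean, std_contaminated)  # the contaminated estimate is an order of magnitude larger
```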

If one needs to calculate the standard deviation in a robust way, the solution is again the MinCovDet estimator used above. The variance of a dataset is the diagonal of its covariance matrix. Therefore, we can compute a robust estimation of the covariance matrix, extract its diagonal, and take the square root to get the standard deviation. Here’s the comparison

```python
# Robust standard deviation calculation
mcd = MinCovDet()
mcd.fit(X_copy)
std1 = np.sqrt(np.diagonal(mcd.covariance_))

plt.plot(np.std(X, axis=0), label="Standard deviation, no outliers")
plt.plot(std1, label="Robust standard deviation, with outliers")
plt.xlabel("Wavelength index")
plt.legend()
plt.show()
```

The two estimates are essentially identical, which means we could use the robust standard deviation estimated through MinCovDet to standardise the data.
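Putting this together, a full robust standardisation could then be sketched as follows. This is illustrative code on simulated data, using the fact that MinCovDet also provides a robust location estimate through its location_ attribute:

```python
import numpy as np
from sklearn.covariance import MinCovDet

# Simulated data with one outlying spectrum
rng = np.random.default_rng(0)
X_copy = rng.normal(size=(80, 10))
X_copy[0, :] = X_copy[0, :] ** 3 + 12 * X_copy[0, :]

mcd = MinCovDet(random_state=0).fit(X_copy)
robust_mean = mcd.location_                          # robust centre
robust_std = np.sqrt(np.diagonal(mcd.covariance_))   # robust standard deviation

# Standardise with the robust statistics instead of the plain mean/std
X_std = (X_copy - robust_mean) / robust_std
```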

Note that simply applying the RobustScaler (which sounds like the one we would need) does not actually work in this case, as its estimation of the scale is still affected by the large outlier.

As always, thanks for reading until the end and until next time.

Daniel