This page was generated from doc/source/methods/ksdrift.ipynb.

Kolmogorov-Smirnov

Overview

The drift detector applies feature-wise two-sample Kolmogorov-Smirnov (K-S) tests. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur.
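
To make the aggregation concrete, the sketch below illustrates the decision logic of both corrections on a vector of feature-wise p-values. It is an illustration of the statistical procedure, not the detector's internal code:

import numpy as np

def drift_decision(p_vals: np.ndarray, p_val: float = .05, correction: str = 'bonferroni') -> bool:
    """ Aggregate feature-wise K-S p-values into a single drift decision. """
    n = len(p_vals)
    if correction == 'bonferroni':
        # control the probability of at least one false positive:
        # flag drift if any p-value falls below the corrected threshold
        return bool((p_vals < p_val / n).any())
    elif correction == 'fdr':
        # Benjamini-Hochberg procedure: control the expected fraction of false positives
        thresholds = p_val * np.arange(1, n + 1) / n
        return bool((np.sort(p_vals) <= thresholds).any())
    raise ValueError('correction must be either bonferroni or fdr.')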

For high-dimensional data, we typically want to reduce the dimensionality before computing the feature-wise univariate K-S tests and aggregating them via the chosen correction method. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE), black-box shift detection using the classifier's softmax outputs (BBSDs) and PCA as out-of-the-box preprocessing methods. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. The adversarial detector which is part of the library can also be transformed into a drift detector that picks up drift which reduces the performance of the classification model. We can therefore combine different preprocessing techniques to figure out whether there is drift which hurts the model performance, and whether this drift can be classified as input drift or label shift.

Usage

Initialize

Parameters:

  • p_val: p-value used for significance of the K-S test for each feature. If the FDR correction method is used, this corresponds to the acceptable q-value.

  • X_ref: Data used as reference distribution.

  • update_X_ref: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {‘last’: N} while for reservoir sampling {‘reservoir_sampling’: N} is passed (see the short sketch after this list).

  • preprocess_fn: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique. The out-of-the-box methods UAE, BBSDs and PCA are illustrated in the example notebook.

  • preprocess_kwargs: Keyword arguments for preprocess_fn. Again see the notebook for concrete examples.

  • correction: Correction type for multivariate data. Either ‘bonferroni’ or ‘fdr’ (False Discovery Rate).

  • alternative: Defines the alternative hypothesis. Options are ‘two-sided’ (default), ‘less’ or ‘greater’.

  • n_features: Number of features used in the K-S test. There is no need to pass it if no preprocessing takes place. If a preprocessing step is applied, the number of features can also be inferred automatically, but this could be more expensive to compute.

  • n_infer: If the number of features needs to be inferred after the preprocessing step, this specifies the number of instances used to do so, since the output dimensionality can depend on the specific preprocessing step.

  • data_type: Optionally specify the data type added to the detector’s metadata, e.g. ‘tabular’ or ‘image’.
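
As noted in the update_X_ref item above, both update schemes are passed as a single-entry dictionary mapping the scheme to the reference size N. A minimal sketch with N=1000:

from alibi_detect.cd import KSDrift

# keep the most recent 1000 instances seen by the detector as the reference set
cd = KSDrift(p_val=0.05, X_ref=X_ref, update_X_ref={'last': 1000})

# or maintain a size 1000 reservoir sample over all instances seen so far
cd = KSDrift(p_val=0.05, X_ref=X_ref, update_X_ref={'reservoir_sampling': 1000})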

Initialized drift detector example:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, Flatten, InputLayer

from alibi_detect.cd import KSDrift
from alibi_detect.cd.preprocess import uae  # Untrained AutoEncoder

encoder_net = tf.keras.Sequential(
  [
      InputLayer(input_shape=(32, 32, 3)),
      Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
      Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
      Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu),
      Flatten(),
      Dense(32)
  ]
)

cd = KSDrift(
    p_val=0.05,
    X_ref=X_ref,
    preprocess_fn=uae,
    preprocess_kwargs={'encoder_net': encoder_net, 'batch_size': 128},
    alternative='two-sided',
    correction='bonferroni'
)
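
In the same spirit, BBSDs-style preprocessing can be a simple function returning the softmax outputs of the trained classifier, passed via preprocess_fn. The sketch below is illustrative only; clf stands for a hypothetical trained tf.keras classifier with softmax outputs:

import numpy as np

def bbsd_softmax(X: np.ndarray, model: tf.keras.Model, batch_size: int = 128) -> np.ndarray:
    # map the input batch to the classifier's softmax outputs (BBSDs)
    return model.predict(X, batch_size=batch_size)

cd_bbsd = KSDrift(
    p_val=0.05,
    X_ref=X_ref,
    preprocess_fn=bbsd_softmax,
    preprocess_kwargs={'model': clf, 'batch_size': 128}  # clf: hypothetical trained classifier
)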

Detect Drift

We detect data drift by simply calling predict on a batch of instances X. We can return the feature-wise p-values before the multivariate correction by setting return_p_val to True. The drift can also be detected at the feature level by setting drift_type to ‘feature’. No multivariate correction will take place since we return the output of n_features univariate tests. For drift detection on all the features combined with the correction, use ‘batch’.

The prediction takes the form of a dictionary with meta and data keys. meta contains the detector’s metadata while data is also a dictionary which contains the actual predictions stored in the following keys:

  • is_drift: 1 if the sample tested has drifted from the reference data and 0 otherwise.

  • p_val: contains feature-level p-values if return_p_val equals True.

preds_drift = cd.predict(X, drift_type='batch', return_p_val=True)
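
The entries of the returned dictionary can then be inspected directly. A minimal sketch, assuming X is a batch with the same feature dimensions as the reference data:

print(preds_drift['data']['is_drift'])  # 1 if drift is flagged for the batch, 0 otherwise
print(preds_drift['data']['p_val'])     # feature-wise p-values since return_p_val=True

# feature-level detection: returns the outcome of the univariate K-S tests per feature
# without applying the multivariate correction
preds_feature = cd.predict(X, drift_type='feature', return_p_val=True)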