Getting Started
Installation
Alibi Detect can be installed from PyPI or conda-forge by following the instructions below.
Install via PyPI
Alibi Detect can be installed from PyPI with `pip`:
Installation with default TensorFlow backend.

```bash
pip install alibi-detect
```

Installation with TensorFlow and PyTorch backends.

```bash
pip install alibi-detect[torch]
```
Note
If you wish to use the GPU version of PyTorch, or are installing on Windows, it is recommended to install and test PyTorch prior to installing alibi-detect.
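For example, a CPU-only PyTorch install on Linux might look like the following sketch (the exact command depends on your platform and CUDA version; consult the PyTorch installation selector for the right one):

```shell
# Install and verify PyTorch first (CPU wheels shown; see pytorch.org for GPU/Windows variants)
pip install torch --index-url https://download.pytorch.org/whl/cpu
python -c "import torch; print(torch.__version__)"

# Then install alibi-detect with the torch extra
pip install alibi-detect[torch]
```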
Installation with the Prophet time series outlier detector enabled.

```bash
pip install alibi-detect[prophet]
```
Install via conda-forge
To install the conda-forge version it is recommended to use mamba, which can be installed to the base conda environment with:

```bash
conda install mamba -n base -c conda-forge
```
`mamba` can then be used to install alibi-detect in a conda environment:

Installation with default TensorFlow backend.

```bash
mamba install -c conda-forge alibi-detect
```

Installation with TensorFlow and PyTorch backends.

```bash
mamba install -c conda-forge alibi-detect pytorch
```
Note
If you wish to use the GPU version of PyTorch, or are installing on Windows, it is recommended to install and test PyTorch prior to installing alibi-detect.
Features
Alibi Detect is an open source Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection. Note, however, that Alibi Detect does not install PyTorch for you; see the PyTorch documentation for how to do this.
To list the latest outlier, adversarial and drift detection algorithms respectively, you can run:
```python
import alibi_detect

alibi_detect.od.__all__
['OutlierAEGMM',
 'IForest',
 'Mahalanobis',
 'OutlierAE',
 'OutlierVAE',
 'OutlierVAEGMM',
 'OutlierProphet',  # requires prophet: pip install alibi-detect[prophet]
 'OutlierSeq2Seq',
 'SpectralResidual',
 'LLR']

alibi_detect.ad.__all__
['AdversarialAE',
 'ModelDistillation']

alibi_detect.cd.__all__
['ChiSquareDrift',
 'ClassifierDrift',
 'ClassifierUncertaintyDrift',
 'ContextMMDDrift',
 'CVMDrift',
 'FETDrift',
 'KSDrift',
 'LearnedKernelDrift',
 'LSDDDrift',
 'LSDDDriftOnline',
 'MMDDrift',
 'MMDDriftOnline',
 'RegressorUncertaintyDrift',
 'SpotTheDiffDrift',
 'TabularDrift']
```
Summary tables highlighting the practical use cases for all the algorithms can be found in the documentation, along with detailed information on the individual outlier, adversarial and drift detectors.
Basic Usage
We will use the VAE outlier detector to illustrate the usage of outlier and adversarial detectors in alibi-detect.
First, we import the detector:
```python
from alibi_detect.od import OutlierVAE
```
Then we initialize it by passing it the necessary arguments:
```python
od = OutlierVAE(
    threshold=0.1,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    latent_dim=1024
)
```
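The `encoder_net` and `decoder_net` arguments above are user-defined Keras models. A minimal sketch is shown below; the layer sizes, activations and flattened input dimension are illustrative assumptions, not values prescribed by the library:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense

n_features = 3072   # hypothetical flattened input size (e.g. 32x32x3 images)
latent_dim = 1024

# Maps inputs to an intermediate representation; the VAE detector itself
# projects this onto the latent mean and log-variance of size latent_dim.
encoder_net = tf.keras.Sequential([
    Dense(512, activation='relu'),
    Dense(256, activation='relu'),
])

# Maps latent samples back to the input space.
decoder_net = tf.keras.Sequential([
    Dense(256, activation='relu'),
    Dense(512, activation='relu'),
    Dense(n_features, activation=None),
])
```

For image data you would typically use convolutional encoder and decoder networks instead; dense layers are used here only to keep the sketch short.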
Some detectors require an additional `.fit` step using training data:

```python
od.fit(X_train)
```
The detectors can be saved or loaded as described in Saving and loading. Finally, we can make predictions on test data and detect outliers or adversarial examples.
```python
preds = od.predict(X_test)
```
The predictions are returned in a dictionary with keys `meta` and `data`. `meta` contains the detector's metadata while `data` is itself a dictionary with the actual predictions (and other relevant values). It has either `is_outlier`, `is_adversarial` or `is_drift` (filled with 0's and 1's) as well as optional `instance_score`, `feature_score` or `p_value` as keys, with numpy arrays as values.
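For instance, the entries of the `data` dictionary can be used directly to pick out the flagged instances. The values below are made up purely to illustrate the dictionary structure described above:

```python
import numpy as np

# Hypothetical output mimicking the dictionary returned by od.predict(X_test)
preds = {
    'meta': {'name': 'OutlierVAE', 'detector_type': 'offline'},
    'data': {
        'is_outlier': np.array([0, 1, 0, 0, 1]),
        'instance_score': np.array([0.02, 0.35, 0.04, 0.01, 0.41]),
    },
}

# Indices of instances flagged as outliers
outlier_idx = np.where(preds['data']['is_outlier'] == 1)[0]
print(outlier_idx.tolist())  # -> [1, 4]
```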
The exact details will vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported in alibi-detect.