alibi_detect.cd.pytorch.preprocess module

class alibi_detect.cd.pytorch.preprocess.HiddenOutput(model, layer=-1, flatten=False)[source]

Bases: Module

forward(x)[source]
Return type:

Tensor
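
A minimal usage sketch (not part of the documented API surface): the classifier architecture, input shape, and expected output dimension below are illustrative assumptions, and the comment relies on the assumption that layer=-1 truncates an nn.Sequential before its final child module.

```python
import torch
import torch.nn as nn
from alibi_detect.cd.pytorch.preprocess import HiddenOutput

# Hypothetical classifier whose penultimate layer yields a 32-d representation.
clf = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 10),
)

# Wrap the classifier so forward() returns hidden activations rather than logits.
hidden = HiddenOutput(clf, layer=-1, flatten=True)

x = torch.randn(8, 100)
# Expected (assuming layer=-1 drops the final Linear(32, 10) head): torch.Size([8, 32])
print(hidden(x).shape)
```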

class alibi_detect.cd.pytorch.preprocess.UAE(encoder_net=None, input_layer=None, shape=None, enc_dim=None)[source]

Bases: Module

forward(x)[source]
Return type:

Tensor
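
A minimal sketch of supplying a custom (untrained) encoder; the encoder architecture and dimensions are illustrative assumptions, not prescribed by the API. Alternatively, shape and enc_dim can be passed instead of encoder_net so that a default, randomly initialized encoder is constructed.

```python
import torch
import torch.nn as nn
from alibi_detect.cd.pytorch.preprocess import UAE

# Custom, untrained encoder mapping 100-d inputs to a 5-d representation
# (dimensions chosen purely for illustration).
encoder_net = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Linear(50, 5),
)
uae = UAE(encoder_net=encoder_net)

x = torch.randn(8, 100)
print(uae(x).shape)  # torch.Size([8, 5])
```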

alibi_detect.cd.pytorch.preprocess.preprocess_drift(x, model, device=None, preprocess_batch_fn=None, tokenizer=None, max_len=None, batch_size=10000000000, dtype=<class 'numpy.float32'>)[source]

Prediction function used for the preprocessing step of a drift detector.

Parameters:
  • x (Union[ndarray, list]) – Batch of instances.

  • model (Union[Module, Sequential]) – Model used for preprocessing.

  • device (Optional[device]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either torch.device('cuda') or torch.device('cpu').

  • preprocess_batch_fn (Optional[Callable]) – Optional batch preprocessing function, for example to convert a list of objects into a batch that can be processed by the PyTorch model.

  • tokenizer (Optional[Callable]) – Optional tokenizer for text drift.

  • max_len (Optional[int]) – Optional max token length for text drift.

  • batch_size (int) – Batch size used during prediction.

  • dtype (Union[Type[generic], dtype]) – Model output type, e.g. np.float32 or torch.float32.

Return type:

Union[ndarray, Tensor, tuple]

Returns:

Numpy array or torch tensor with predictions.
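
A minimal sketch of the typical usage pattern: bind the model (and any other keyword arguments) with functools.partial and pass the result as preprocess_fn to a drift detector. The encoder, data, batch size, and choice of MMDDrift below are illustrative assumptions, not requirements of this function.

```python
from functools import partial

import numpy as np
import torch.nn as nn
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.pytorch.preprocess import preprocess_drift

# Untrained encoder used purely for dimensionality reduction (illustrative sizes).
encoder = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 5))

x_ref = np.random.randn(500, 100).astype(np.float32)
x_test = np.random.randn(200, 100).astype(np.float32)

# Bind the model and a smaller batch size so the detector can call
# preprocess_fn(x) with only the data argument.
preprocess_fn = partial(preprocess_drift, model=encoder, batch_size=128)

cd = MMDDrift(x_ref, backend='pytorch', preprocess_fn=preprocess_fn)
print(cd.predict(x_test)['data']['is_drift'])
```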