alibi_detect.utils.pytorch.prediction module

alibi_detect.utils.pytorch.prediction.predict_batch(x, model, device=None, batch_size=10000000000, preprocess_fn=None, dtype=numpy.float32)

Make batch predictions with a model.

Parameters
  • x (Union[list, ndarray, Tensor]) – Batch of instances.

  • model (Union[Callable, Module, Sequential]) – PyTorch model.

  • device (Optional[device]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either torch.device('cuda') or torch.device('cpu').

  • batch_size (int) – Batch size used during prediction.

  • preprocess_fn (Optional[Callable]) – Optional preprocessing function for each batch.

  • dtype (Union[Type[generic], dtype]) – Model output type, e.g. np.float32 or torch.float32.

Return type

Union[ndarray, Tensor, tuple]

Returns

Numpy array, torch tensor, or tuple of those with the model outputs.
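A minimal usage sketch (the toy nn.Sequential model and the input shapes below are illustrative assumptions, not part of the API):

    import numpy as np
    import torch
    import torch.nn as nn

    from alibi_detect.utils.pytorch.prediction import predict_batch

    # Toy model for illustration; any nn.Module, nn.Sequential or callable works.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    x = np.random.rand(100, 4).astype(np.float32)

    # Predict in batches of 32; with a numpy dtype the outputs come back as a numpy array.
    preds = predict_batch(x, model, device=torch.device('cpu'), batch_size=32, dtype=np.float32)
    print(preds.shape)  # (100, 2)

    # Passing a torch dtype returns torch tensors instead.
    preds_t = predict_batch(x, model, batch_size=32, dtype=torch.float32)

Numpy inputs are converted to torch tensors internally, so the batch can be passed as either type; preprocess_fn, if given, is applied to each batch before the model call.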

alibi_detect.utils.pytorch.prediction.predict_batch_transformer(x, model, tokenizer, max_len, device=None, batch_size=10000000000, dtype=numpy.float32)

Make batch predictions using a transformers tokenizer and model.

Parameters
  • x (Union[list, ndarray]) – Batch of instances.

  • model (Union[Module, Sequential]) – PyTorch model.

  • tokenizer (Callable) – Tokenizer for model.

  • max_len (int) – Max sequence length for tokens.

  • device (Optional[device]) – Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either torch.device('cuda') or torch.device('cpu').

  • batch_size (int) – Batch size used during prediction.

  • dtype (Union[Type[generic], dtype]) – Model output type, e.g. np.float32 or torch.float32.

Return type

Union[ndarray, Tensor, tuple]

Returns

Numpy array, torch tensor, or tuple of those with the model outputs.
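A minimal usage sketch (the 'bert-base-uncased' checkpoint and the TransformerEmbedding wrapper from alibi_detect.models.pytorch are illustrative choices; any HuggingFace tokenizer paired with a model that returns tensors works):

    import numpy as np
    from transformers import AutoTokenizer

    from alibi_detect.models.pytorch import TransformerEmbedding
    from alibi_detect.utils.pytorch.prediction import predict_batch_transformer

    # Illustrative checkpoint; tokenizer and model should come from the same checkpoint.
    model_name = 'bert-base-uncased'
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # Wrapper returning hidden-state embeddings as a tensor rather than a ModelOutput.
    model = TransformerEmbedding(model_name, embedding_type='hidden_state', layers=[-1])

    x = ['a first example sentence', 'a second, slightly longer example sentence']

    # Tokenize each instance to at most 100 tokens and predict in batches of 32.
    embeddings = predict_batch_transformer(x, model, tokenizer, max_len=100,
                                           batch_size=32, dtype=np.float32)

With the default numpy dtype the embeddings come back as a numpy array; pass e.g. torch.float32 to receive torch tensors instead.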