alibi_detect.models package
class alibi_detect.models.AE(encoder_net, decoder_net, name='ae')

Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, name='ae')

Combine encoder and decoder in an AE.

Parameters
- encoder_net (Sequential) – Layers for the encoder, wrapped in a tf.keras.Sequential class.
- decoder_net (Sequential) – Layers for the decoder, wrapped in a tf.keras.Sequential class.
- name (str) – Name of the autoencoder model.

Return type
None
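A minimal sketch of the networks AE expects, in plain tf.keras. The layer sizes and the input dimension are illustrative assumptions, not part of the API; AE itself simply chains the two networks, so the forward pass below mirrors what the model computes:

```python
import tensorflow as tf

input_dim, latent_dim = 32, 4  # illustrative dimensions

# Layers for the encoder, wrapped in tf.keras.Sequential as AE expects.
encoder_net = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(latent_dim),
])

# Layers for the decoder, mapping the latent space back to input space.
decoder_net = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(input_dim),
])

# With alibi-detect installed, the two nets are combined as:
#   from alibi_detect.models import AE
#   ae = AE(encoder_net, decoder_net)
# The reconstruction the model produces is decoder(encoder(x)):
x = tf.random.normal((8, input_dim))
x_recon = decoder_net(encoder_net(x))
print(x_recon.shape)  # (8, 32)
```

Keeping the encoder and decoder as separate Sequential models lets the same nets be reused across the detectors that accept them.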
class alibi_detect.models.AEGMM(encoder_net, decoder_net, gmm_density_net, n_gmm, recon_features=<function eucl_cosim_features>, name='aegmm')

Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, gmm_density_net, n_gmm, recon_features=<function eucl_cosim_features>, name='aegmm')

Deep Autoencoding Gaussian Mixture Model.

Parameters
- encoder_net (Sequential) – Layers for the encoder, wrapped in a tf.keras.Sequential class.
- decoder_net (Sequential) – Layers for the decoder, wrapped in a tf.keras.Sequential class.
- gmm_density_net (Sequential) – Layers for the GMM network, wrapped in a tf.keras.Sequential class.
- n_gmm (int) – Number of components in the GMM.
- recon_features (Callable) – Function to extract features from the instance reconstructed by the decoder.
- name (str) – Name of the AEGMM model.

Return type
None
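A sketch of the pieces specific to AEGMM: the GMM density network consumes the latent encoding concatenated with reconstruction features and outputs soft mixture-component assignments. The `recon_features` function below is a hand-written stand-in with the same shape of output as the default `eucl_cosim_features` (per-instance Euclidean distance and cosine similarity between an instance and its reconstruction); the layer sizes are illustrative assumptions:

```python
import tensorflow as tf

n_gmm, latent_dim, input_dim = 2, 4, 32  # illustrative dimensions

# GMM density network: maps [latent vector, reconstruction features] to
# one soft assignment per mixture component (softmax over n_gmm units).
gmm_density_net = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='tanh'),
    tf.keras.layers.Dense(n_gmm, activation='softmax'),
])

def recon_features(x, x_recon):
    # Stand-in for eucl_cosim_features: Euclidean distance and cosine
    # similarity between each instance and its reconstruction.
    dist = tf.norm(x - x_recon, axis=-1, keepdims=True)
    cosim = tf.reduce_sum(x * x_recon, axis=-1, keepdims=True) / (
        tf.norm(x, axis=-1, keepdims=True)
        * tf.norm(x_recon, axis=-1, keepdims=True))
    return tf.concat([dist, cosim], axis=-1)

z = tf.random.normal((8, latent_dim))              # latent encodings
feats = recon_features(tf.random.normal((8, input_dim)),
                       tf.random.normal((8, input_dim)))
gamma = gmm_density_net(tf.concat([z, feats], axis=-1))
print(gamma.shape)  # (8, 2); each row sums to 1
```

The mixture assignments `gamma` are what the downstream detector uses to estimate GMM parameters and score sample energies.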
class alibi_detect.models.VAE(encoder_net, decoder_net, latent_dim, beta=1.0, name='vae')

Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, latent_dim, beta=1.0, name='vae')

Combine encoder and decoder in a VAE.

Parameters
- encoder_net (Sequential) – Layers for the encoder, wrapped in a tf.keras.Sequential class.
- decoder_net (Sequential) – Layers for the decoder, wrapped in a tf.keras.Sequential class.
- latent_dim (int) – Dimensionality of the latent space.
- beta (float) – Beta parameter for the KL-divergence loss term.
- name (str) – Name of the VAE model.

Return type
None
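The two parameters that distinguish VAE from the plain AE are `latent_dim` and `beta`. A minimal sketch of what they control, assuming the standard VAE formulation (the encoder produces a mean and log-variance for q(z|x), sampling uses the reparameterization trick, and `beta` scales the KL-divergence loss term):

```python
import tensorflow as tf

latent_dim, beta = 4, 1.0

# Posterior statistics for a batch of 8 instances. Zeros here make
# q(z|x) equal to the standard-normal prior, so the KL term vanishes.
mu = tf.zeros((8, latent_dim))
log_var = tf.zeros((8, latent_dim))

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
eps = tf.random.normal(tf.shape(mu))
z = mu + tf.exp(0.5 * log_var) * eps

# KL(q(z|x) || N(0, I)) per instance, scaled by beta in the loss:
kl = -0.5 * tf.reduce_sum(1 + log_var - tf.square(mu) - tf.exp(log_var),
                          axis=-1)
loss_kl = beta * tf.reduce_mean(kl)
print(float(loss_kl))  # 0.0 when mu = 0 and log_var = 0
```

Raising `beta` above 1 weights the KL term more heavily relative to the reconstruction loss, pushing the posterior toward the prior.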
class alibi_detect.models.VAEGMM(encoder_net, decoder_net, gmm_density_net, n_gmm, latent_dim, recon_features=<function eucl_cosim_features>, beta=1.0, name='vaegmm')

Bases: tensorflow.keras.Model

__init__(encoder_net, decoder_net, gmm_density_net, n_gmm, latent_dim, recon_features=<function eucl_cosim_features>, beta=1.0, name='vaegmm')

Variational Autoencoding Gaussian Mixture Model.

Parameters
- encoder_net (Sequential) – Layers for the encoder, wrapped in a tf.keras.Sequential class.
- decoder_net (Sequential) – Layers for the decoder, wrapped in a tf.keras.Sequential class.
- gmm_density_net (Sequential) – Layers for the GMM network, wrapped in a tf.keras.Sequential class.
- n_gmm (int) – Number of components in the GMM.
- latent_dim (int) – Dimensionality of the latent space.
- recon_features (Callable) – Function to extract features from the instance reconstructed by the decoder.
- beta (float) – Beta parameter for the KL-divergence loss term.
- name (str) – Name of the VAEGMM model.

Return type
None
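VAEGMM takes the full set of networks. A sketch of a plausible configuration, with illustrative layer sizes; the final projection to the latent mean and log-variance is handled by the model itself (which is why `latent_dim` is a separate constructor argument rather than inferred from `encoder_net`):

```python
import tensorflow as tf

input_dim, latent_dim, n_gmm, beta = 32, 4, 2, 1.0  # illustrative values

# Feature extractor; the model projects its output to the
# latent mean and log-variance internally.
encoder_net = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(8),
])
decoder_net = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(input_dim),
])
gmm_density_net = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='tanh'),
    tf.keras.layers.Dense(n_gmm, activation='softmax'),
])

# With alibi-detect installed, the model would be assembled as:
#   from alibi_detect.models import VAEGMM
#   vaegmm = VAEGMM(encoder_net, decoder_net, gmm_density_net,
#                   n_gmm=n_gmm, latent_dim=latent_dim, beta=beta)
feats = encoder_net(tf.random.normal((8, input_dim)))
print(feats.shape)  # (8, 8)
```

Relative to AEGMM, the only additions are the variational sampling step (governed by `latent_dim` and `beta`); the GMM branch is unchanged.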
class alibi_detect.models.PixelCNN(image_shape, conditional_shape=None, num_resnet=5, num_hierarchies=3, num_filters=160, num_logistic_mix=10, receptive_field_dims=(3, 3), dropout_p=0.5, resnet_activation='concat_elu', l2_weight=0.0, use_weight_norm=True, use_data_init=True, high=255, low=0, dtype=tensorflow.compat.v2.float32, name='PixelCNN')

Bases: tensorflow_probability.python.distributions.distribution.Distribution

__init__(image_shape, conditional_shape=None, num_resnet=5, num_hierarchies=3, num_filters=160, num_logistic_mix=10, receptive_field_dims=(3, 3), dropout_p=0.5, resnet_activation='concat_elu', l2_weight=0.0, use_weight_norm=True, use_data_init=True, high=255, low=0, dtype=tensorflow.compat.v2.float32, name='PixelCNN')

Construct a Pixel CNN++ distribution.

Parameters
- image_shape (tuple) – 3D TensorShape or tuple for the [height, width, channels] dimensions of the image.
- conditional_shape (Optional[tuple]) – TensorShape or tuple for the shape of the conditional input, or None if there is no conditional input.
- num_resnet (int) – Number of layers (shown in Figure 2 of [2]) within each highest-level block of Figure 2 of [1].
- num_hierarchies (int) – Number of highest-level blocks (separated by expansions/contractions of dimensions in Figure 2 of [1]).
- num_filters (int) – Number of convolutional filters.
- num_logistic_mix (int) – Number of components in the logistic mixture distribution.
- receptive_field_dims (tuple) – Height and width in pixels of the receptive field of the convolutional layers above and to the left of a given pixel. The width (second element of the tuple) should be odd. Figure 1 (middle) of [2] shows a receptive field of (3, 5) (the row containing the current pixel is included in the height). The default of (3, 3) was used to produce the results in [1].
- dropout_p (float) – Dropout probability; should be between 0 and 1.
- resnet_activation (str) – Type of activation to use in the resnet blocks. May be 'concat_elu', 'elu' or 'relu'.
- l2_weight (float) – L2 regularization weight.
- use_weight_norm (bool) – If True, use weight normalization (works only in eager mode).
- use_data_init (bool) – If True, use data-dependent initialization (has no effect if use_weight_norm is False).
- high (int) – Maximum value of the input data (255 for an 8-bit image).
- low (int) – Minimum value of the input data.
- dtype – Data type of the Distribution.
- name (str) – Name of the Distribution.

Return type
None
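Since PixelCNN is a Distribution, scoring and sampling go through `log_prob` and `sample` (sketched in the comments below). The key parameters `num_logistic_mix`, `high` and `low` govern the per-pixel likelihood: each pixel is modeled by a mixture of discretized logistic distributions over the integers in [low, high]. The function below is a simplified single-pixel illustration of that likelihood, with hand-picked mixture parameters (the real model predicts them per pixel); it is an assumption-laden sketch, not the library's implementation:

```python
import tensorflow as tf

low, high = 0, 255        # 8-bit pixel range, matching the defaults
num_logistic_mix = 10

# Hand-picked mixture parameters for one pixel (the network predicts these).
logits = tf.zeros((num_logistic_mix,))                 # uniform weights
means = tf.linspace(0.0, 255.0, num_logistic_mix)
log_scales = tf.fill((num_logistic_mix,), 2.0)

def discretized_logistic_log_prob(x):
    # Probability mass of each logistic component on the integer bin
    # [x - 0.5, x + 0.5]; the edge bins absorb all mass beyond low/high.
    inv_scale = tf.exp(-log_scales)
    cdf_plus = tf.sigmoid((x + 0.5 - means) * inv_scale)
    cdf_minus = tf.sigmoid((x - 0.5 - means) * inv_scale)
    probs = tf.where(x <= float(low), cdf_plus,
                     tf.where(x >= float(high), 1.0 - cdf_minus,
                              cdf_plus - cdf_minus))
    # Log-sum-exp over mixture components with softmax weights.
    log_mix = tf.nn.log_softmax(logits) + tf.math.log(probs + 1e-12)
    return tf.reduce_logsumexp(log_mix)

lp = discretized_logistic_log_prob(tf.constant(128.0))
print(float(lp) < 0.0)  # a valid log-probability mass

# With alibi-detect and tensorflow-probability installed, the full
# distribution is used as:
#   from alibi_detect.models import PixelCNN
#   dist = PixelCNN(image_shape=(28, 28, 1))
#   log_prob = dist.log_prob(images)   # per-image log-likelihoods
#   samples = dist.sample(5)           # autoregressive sampling
```

An image's log-likelihood under the model is the sum of such per-pixel terms, which is what makes the distribution usable for likelihood-based outlier scoring.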