Diffusers BOINC AI docs
  • 🌍GET STARTED
    • Diffusers
    • Quicktour
    • Effective and efficient diffusion
    • Installation
  • 🌍TUTORIALS
    • Overview
    • Understanding models and schedulers
    • AutoPipeline
    • Train a diffusion model
  • 🌍USING DIFFUSERS
    • 🌍LOADING & HUB
      • Overview
      • Load pipelines, models, and schedulers
      • Load and compare different schedulers
      • Load community pipelines
      • Load safetensors
      • Load different Stable Diffusion formats
      • Push files to the Hub
    • 🌍TASKS
      • Unconditional image generation
      • Text-to-image
      • Image-to-image
      • Inpainting
      • Depth-to-image
    • 🌍TECHNIQUES
      • Textual inversion
      • Distributed inference with multiple GPUs
      • Improve image quality with deterministic generation
      • Control image brightness
      • Prompt weighting
    • 🌍PIPELINES FOR INFERENCE
      • Overview
      • Stable Diffusion XL
      • ControlNet
      • Shap-E
      • DiffEdit
      • Distilled Stable Diffusion inference
      • Create reproducible pipelines
      • Community pipelines
      • How to contribute a community pipeline
    • 🌍TRAINING
      • Overview
      • Create a dataset for training
      • Adapt a model to a new task
      • Unconditional image generation
      • Textual Inversion
      • DreamBooth
      • Text-to-image
      • Low-Rank Adaptation of Large Language Models (LoRA)
      • ControlNet
      • InstructPix2Pix Training
      • Custom Diffusion
      • T2I-Adapters
    • 🌍TAKING DIFFUSERS BEYOND IMAGES
      • Other Modalities
  • 🌍OPTIMIZATION/SPECIAL HARDWARE
    • Overview
    • Memory and Speed
    • Torch 2.0 support
    • Stable Diffusion in JAX/Flax
    • xFormers
    • ONNX
    • OpenVINO
    • Core ML
    • MPS
    • Habana Gaudi
    • Token Merging
  • 🌍CONCEPTUAL GUIDES
    • Philosophy
    • Controlled generation
    • How to contribute?
    • Diffusers' Ethical Guidelines
    • Evaluating Diffusion Models
  • 🌍API
    • 🌍MAIN CLASSES
      • Attention Processor
      • Diffusion Pipeline
      • Logging
      • Configuration
      • Outputs
      • Loaders
      • Utilities
      • VAE Image Processor
    • 🌍MODELS
      • Overview
      • UNet1DModel
      • UNet2DModel
      • UNet2DConditionModel
      • UNet3DConditionModel
      • VQModel
      • AutoencoderKL
      • AsymmetricAutoencoderKL
      • Tiny AutoEncoder
      • Transformer2D
      • Transformer Temporal
      • Prior Transformer
      • ControlNet
    • 🌍PIPELINES
      • Overview
      • AltDiffusion
      • Attend-and-Excite
      • Audio Diffusion
      • AudioLDM
      • AudioLDM 2
      • AutoPipeline
      • Consistency Models
      • ControlNet
      • ControlNet with Stable Diffusion XL
      • Cycle Diffusion
      • Dance Diffusion
      • DDIM
      • DDPM
      • DeepFloyd IF
      • DiffEdit
      • DiT
      • IF
      • InstructPix2Pix
      • Kandinsky
      • Kandinsky 2.2
      • Latent Diffusion
      • MultiDiffusion
      • MusicLDM
      • PaintByExample
      • Parallel Sampling of Diffusion Models
      • Pix2Pix Zero
      • PNDM
      • RePaint
      • Score SDE VE
      • Self-Attention Guidance
      • Semantic Guidance
      • Shap-E
      • Spectrogram Diffusion
      • 🌍STABLE DIFFUSION
        • Overview
        • Text-to-image
        • Image-to-image
        • Inpainting
        • Depth-to-image
        • Image variation
        • Safe Stable Diffusion
        • Stable Diffusion 2
        • Stable Diffusion XL
        • Latent upscaler
        • Super-resolution
        • LDM3D Text-to-(RGB, Depth)
        • Stable Diffusion T2I-adapter
        • GLIGEN (Grounded Language-to-Image Generation)
      • Stable unCLIP
      • Stochastic Karras VE
      • Text-to-image model editing
      • Text-to-video
      • Text2Video-Zero
      • UnCLIP
      • Unconditional Latent Diffusion
      • UniDiffuser
      • Value-guided sampling
      • Versatile Diffusion
      • VQ Diffusion
      • Wuerstchen
    • 🌍SCHEDULERS
      • Overview
      • CMStochasticIterativeScheduler
      • DDIMInverseScheduler
      • DDIMScheduler
      • DDPMScheduler
      • DEISMultistepScheduler
      • DPMSolverMultistepInverse
      • DPMSolverMultistepScheduler
      • DPMSolverSDEScheduler
      • DPMSolverSinglestepScheduler
      • EulerAncestralDiscreteScheduler
      • EulerDiscreteScheduler
      • HeunDiscreteScheduler
      • IPNDMScheduler
      • KarrasVeScheduler
      • KDPM2AncestralDiscreteScheduler
      • KDPM2DiscreteScheduler
      • LMSDiscreteScheduler
      • PNDMScheduler
      • RePaintScheduler
      • ScoreSdeVeScheduler
      • ScoreSdeVpScheduler
      • UniPCMultistepScheduler
      • VQDiffusionScheduler

AutoencoderKL


The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🌍 Diffusers to encode images into latents and to decode latent representations into images.

The abstract from the paper is:

How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
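
As a brief aside, the reparameterization the abstract refers to is usually written as follows (a standard formulation of the trick, not taken from this page):

z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)

Because the noise \epsilon is drawn independently of the encoder parameters \phi, gradients of the variational lower bound can flow through \mu_\phi and \sigma_\phi and be optimized with standard stochastic gradient methods.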

Loading from the original format

By default, AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded from the original format using FromOriginalVAEMixin.from_single_file as follows:


from diffusers import AutoencoderKL

url = "https://boincai.com/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors"  # can also be local file
model = AutoencoderKL.from_single_file(url)
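
For comparison, a minimal sketch of the default from_pretrained() loading path is shown below; the repository ids are only illustrative examples of diffusers-format checkpoints.

from diffusers import AutoencoderKL

# Load a standalone VAE checkpoint stored in the diffusers format (illustrative repo id).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Or load the VAE that ships inside a full pipeline repository via its subfolder.
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")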

AutoencoderKL

class diffusers.AutoencoderKL

( in_channels: int = 3, out_channels: int = 3, down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',), up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',), block_out_channels: typing.Tuple[int] = (64,), layers_per_block: int = 1, act_fn: str = 'silu', latent_channels: int = 4, norm_num_groups: int = 32, sample_size: int = 32, scaling_factor: float = 0.18215, force_upcast: float = True )

Parameters

  • in_channels (int, optional, defaults to 3) — Number of channels in the input image.

  • out_channels (int, optional, defaults to 3) — Number of channels in the output.

  • down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — Tuple of downsample block types.

  • up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — Tuple of upsample block types.

  • block_out_channels (Tuple[int], optional, defaults to (64,)) — Tuple of block output channels.

  • layers_per_block (int, optional, defaults to 1) — Number of ResNet layers for each block.

  • act_fn (str, optional, defaults to "silu") — The activation function to use.

  • latent_channels (int, optional, defaults to 4) — Number of channels in the latent space.

  • norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization.

  • sample_size (int, optional, defaults to 32) — Sample input size.

  • scaling_factor (float, optional, defaults to 0.18215) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.

  • force_upcast (bool, optional, defaults to True) — If enabled, forces the VAE to run in float32 for high-resolution pipelines such as SD-XL. The VAE can be fine-tuned or trained to a lower range without losing too much precision, in which case force_upcast can be set to False (see https://boincai.com/madebyollin/sdxl-vae-fp16-fix).

A VAE model with KL loss for encoding images into latents and decoding latent representations into images.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

disable_slicing

( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_tiling

( use_tiling: bool = True )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
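
A minimal sketch of how slicing and tiling are toggled in practice, assuming a CUDA device and an illustrative checkpoint id:

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda")  # illustrative checkpoint
vae.enable_slicing()  # decode a batch one sample at a time
vae.enable_tiling()   # split large inputs into overlapping tiles

latents = torch.randn(4, 4, 64, 64, device="cuda")  # dummy latent batch
with torch.no_grad():
    images = vae.decode(latents / vae.config.scaling_factor).sample

# Both options can be reverted at any time.
vae.disable_slicing()
vae.disable_tiling()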

forward

( sample: FloatTensor, sample_posterior: bool = False, return_dict: bool = True, generator: typing.Optional[torch._C.Generator] = None )

Parameters

  • sample (torch.FloatTensor) — Input sample.

  • sample_posterior (bool, optional, defaults to False) — Whether to sample from the posterior.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.
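
As a hedged sketch of the forward pass, the call below encodes a dummy image, samples a latent from the posterior (because sample_posterior=True), and decodes it back; the checkpoint id is illustrative.

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # illustrative checkpoint
image = torch.randn(1, 3, 256, 256)  # dummy batch, values roughly in [-1, 1]
generator = torch.Generator().manual_seed(0)

with torch.no_grad():
    # forward() returns a DecoderOutput when return_dict=True (the default).
    reconstruction = vae(sample=image, sample_posterior=True, generator=generator).sample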

set_attn_processor

( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
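
For illustration, the snippet below swaps every attention layer to the PyTorch 2.0 scaled-dot-product processor and then restores the defaults; the checkpoint id is an assumption.

from diffusers import AutoencoderKL
from diffusers.models.attention_processor import AttnProcessor2_0

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # illustrative checkpoint

# A single processor instance is applied to all attention layers;
# AttnProcessor2_0 requires PyTorch 2.0 for scaled_dot_product_attention.
vae.set_attn_processor(AttnProcessor2_0())

# Revert to the default attention implementation.
vae.set_default_attn_processor()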

set_default_attn_processor

( )

Disables custom attention processors and sets the default attention implementation.

tiled_decode

( z: FloatTensor, return_dict: bool = True ) → DecoderOutput or tuple

Parameters

  • z (torch.FloatTensor) — Input batch of latent vectors.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.

Returns

DecoderOutput or tuple

If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned.

Decode a batch of images using a tiled decoder.

tiled_encode

( x: FloatTensor, return_dict: bool = True ) → AutoencoderKLOutput or tuple

Parameters

  • x (torch.FloatTensor) — Input batch of images.

  • return_dict (bool, optional, defaults to True) — Whether or not to return an AutoencoderKLOutput instead of a plain tuple.

Returns

AutoencoderKLOutput or tuple

If return_dict is True, an AutoencoderKLOutput is returned, otherwise a plain tuple is returned.

Encode a batch of images using a tiled encoder.

When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the output, but they should be much less noticeable.
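
A rough sketch of tiled encoding under these assumptions (CUDA device, illustrative checkpoint, and an input large enough to trigger tiling):

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda")  # illustrative checkpoint
vae.enable_tiling()

# A large dummy image; with tiling enabled, inputs above the tile size are
# processed tile by tile, so peak memory stays roughly flat as resolution grows.
large_image = torch.randn(1, 3, 1536, 1536, device="cuda")
with torch.no_grad():
    posterior = vae.encode(large_image).latent_dist
    latents = posterior.sample() * vae.config.scaling_factor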

AutoencoderKLOutput

class diffusers.models.autoencoder_kl.AutoencoderKLOutput

( latent_dist: DiagonalGaussianDistribution )

Parameters

  • latent_dist (DiagonalGaussianDistribution) — Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. DiagonalGaussianDistribution allows for sampling latents from the distribution.

Output of AutoencoderKL encoding method.
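
A brief sketch of how the latent_dist field is typically consumed (the checkpoint id is illustrative):

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # illustrative checkpoint
image = torch.randn(1, 3, 256, 256)  # dummy image batch

with torch.no_grad():
    posterior = vae.encode(image).latent_dist  # DiagonalGaussianDistribution
    generator = torch.Generator().manual_seed(0)
    stochastic_latents = posterior.sample(generator=generator)  # draw from the posterior
    deterministic_latents = posterior.mode()                    # or take its mode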

DecoderOutput

class diffusers.models.vae.DecoderOutput

( sample: FloatTensor )

Parameters

  • sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — The decoded output sample from the last layer of the model.

Output of decoding method.

FlaxAutoencoderKL

class diffusers.FlaxAutoencoderKL

( in_channels: int = 3, out_channels: int = 3, down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',), up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',), block_out_channels: typing.Tuple[int] = (64,), layers_per_block: int = 1, act_fn: str = 'silu', latent_channels: int = 4, norm_num_groups: int = 32, sample_size: int = 32, scaling_factor: float = 0.18215, dtype: dtype = <class 'jax.numpy.float32'>, parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = <flax.linen.module._Sentinel object at 0x7f3306dc42e0>, name: typing.Optional[str] = None )

Parameters

  • in_channels (int, optional, defaults to 3) — Number of channels in the input image.

  • out_channels (int, optional, defaults to 3) — Number of channels in the output.

  • down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — Tuple of downsample block types.

  • up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — Tuple of upsample block types.

  • block_out_channels (Tuple[int], optional, defaults to (64,)) — Tuple of block output channels.

  • layers_per_block (int, optional, defaults to 1) — Number of ResNet layers for each block.

  • act_fn (str, optional, defaults to silu) — The activation function to use.

  • latent_channels (int, optional, defaults to 4) — Number of channels in the latent space.

  • norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization.

  • sample_size (int, optional, defaults to 32) — Sample input size.

  • scaling_factor (float, optional, defaults to 0.18215) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.

  • dtype (jnp.dtype, optional, defaults to jnp.float32) — The dtype of the parameters.

Flax implementation of a VAE model with KL loss for decoding latent representations.

This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

This model is a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its general usage and behavior.

Inherent JAX features such as the following are supported:

  • Just-In-Time (JIT) compilation

  • Automatic Differentiation

  • Vectorization

  • Parallelization
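
A hedged sketch of Flax usage, assuming the repository below ships Flax weights; FlaxModelMixin.from_pretrained returns the module together with its parameters, which are then passed explicitly through apply:

import jax.numpy as jnp
from diffusers import FlaxAutoencoderKL

# Illustrative repository id; the VAE lives in the "vae" subfolder.
vae, vae_params = FlaxAutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae", dtype=jnp.float32
)

latents = jnp.zeros((1, 4, 64, 64))  # dummy latents (batch, channels, height, width)
# Flax modules are stateless, so parameters are supplied at call time.
decoded = vae.apply(
    {"params": vae_params},
    latents / vae.config.scaling_factor,
    method=vae.decode,
).sample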

FlaxAutoencoderKLOutput

class diffusers.models.vae_flax.FlaxAutoencoderKLOutput

( latent_dist: FlaxDiagonalGaussianDistribution )

Parameters

  • latent_dist (FlaxDiagonalGaussianDistribution) — Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution.

Output of AutoencoderKL encoding method.

replace

( **updates )

Returns a new object replacing the specified fields with new values.

FlaxDecoderOutput

class diffusers.models.vae_flax.FlaxDecoderOutput

( sample: Array )

Parameters

  • sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — The decoded output sample from the last layer of the model.

  • dtype (jnp.dtype, optional, defaults to jnp.float32) — The dtype of the parameters.

Output of decoding method.

replace

( **updates )

Returns a new object replacing the specified fields with new values.

