Tiny AutoEncoder

Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly.

To use with Stable Diffusion v2.1:


import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("cheesecake.png")

To use with Stable Diffusion XL 1.0:


import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("cheesecake_sdxl.png")

AutoencoderTiny

class diffusers.AutoencoderTiny

( in_channels = 3, out_channels = 3, encoder_block_out_channels: typing.Tuple[int] = (64, 64, 64, 64), decoder_block_out_channels: typing.Tuple[int] = (64, 64, 64, 64), act_fn: str = 'relu', latent_channels: int = 4, upsampling_scaling_factor: int = 2, num_encoder_blocks: typing.Tuple[int] = (1, 3, 3, 3), num_decoder_blocks: typing.Tuple[int] = (3, 3, 3, 1), latent_magnitude: int = 3, latent_shift: float = 0.5, force_upcast: float = False, scaling_factor: float = 1.0 )

Parameters

  • in_channels (int, optional, defaults to 3) — Number of channels in the input image.

  • out_channels (int, optional, defaults to 3) — Number of channels in the output.

  • encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — Tuple of integers representing the number of output channels for each encoder block. The length of the tuple should be equal to the number of encoder blocks.

  • decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — Tuple of integers representing the number of output channels for each decoder block. The length of the tuple should be equal to the number of decoder blocks.

  • act_fn (str, optional, defaults to "relu") — Activation function to be used throughout the model.

  • latent_channels (int, optional, defaults to 4) — Number of channels in the latent representation. The latent space acts as a compressed representation of the input image.

  • upsampling_scaling_factor (int, optional, defaults to 2) — Scaling factor for upsampling in the decoder. It determines the size of the output image during the upsampling process.

  • num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The length of the tuple should be equal to the number of stages in the encoder. Each stage has a different number of encoder blocks.

  • num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The length of the tuple should be equal to the number of stages in the decoder. Each stage has a different number of decoder blocks.

  • latent_magnitude (float, optional, defaults to 3.0) — Magnitude of the latent representation. This parameter scales the latent representation values to control the extent of information preservation.

  • latent_shift (float, optional, defaults to 0.5) — Shift applied to the latent representation. This parameter controls the center of the latent space.

  • scaling_factor (float, optional, defaults to 1.0) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper. For this Autoencoder, however, no such scaling factor was used, hence the default value of 1.0.

  • force_upcast (bool, optional, defaults to False) — If enabled, it forces the VAE to run in float32 for high-resolution pipelines, such as SD-XL. The VAE can be fine-tuned/trained to a lower range without losing too much precision, in which case force_upcast can be set to False (see this fp16-friendly AutoEncoder).

A tiny distilled VAE model for encoding images into latents and decoding latent representations into images.

AutoencoderTiny is a wrapper around the original implementation of TAESD.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
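
The snippet below is a minimal sketch, not taken from the original docs: it instantiates AutoencoderTiny directly from the constructor arguments documented above. This builds a randomly initialized model using the documented default values; for real work you would load a pretrained checkpoint such as madebyollin/taesd as shown earlier.

from diffusers import AutoencoderTiny

# Build a randomly initialized Tiny AutoEncoder with the documented default configuration.
tiny_vae = AutoencoderTiny(
    in_channels=3,
    out_channels=3,
    encoder_block_out_channels=(64, 64, 64, 64),
    decoder_block_out_channels=(64, 64, 64, 64),
    act_fn="relu",
    latent_channels=4,
    upsampling_scaling_factor=2,
    num_encoder_blocks=(1, 3, 3, 3),
    num_decoder_blocks=(3, 3, 3, 1),
    latent_magnitude=3,
    latent_shift=0.5,
)

# The model is small enough to inspect its parameter count directly.
print(sum(p.numel() for p in tiny_vae.parameters()))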

disable_slicing

( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_tiling

( use_tiling: bool = True )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
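
As a hedged example (reusing the Stable Diffusion v2.1 and madebyollin/taesd checkpoints from the snippets above), the sketch below turns on sliced and tiled decoding to trade a little speed for a lower peak memory footprint when generating several images at once or working at large resolutions.

import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Decode the batch one image at a time instead of all at once.
pipe.vae.enable_slicing()
# Split each image into tiles so large images fit in memory.
pipe.vae.enable_tiling()

prompts = ["slice of delicious New York-style berry cheesecake"] * 4
images = pipe(prompts, num_inference_steps=25).images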

forward

( sample: FloatTensor, return_dict: bool = True )

Parameters

  • sample (torch.FloatTensor) — Input sample.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.

scale_latents

( x )

raw latents -> [0, 1]

unscale_latents

( x )

[0, 1] -> raw latents
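
The helpers below are a rough sketch (not the library source) of what this scaling amounts to, assuming the documented latent_magnitude and latent_shift defaults: raw latents are mapped affinely into [0, 1] (for example to store or preview them as images), and the inverse mapping recovers the raw values.

import torch

latent_magnitude = 3   # documented default
latent_shift = 0.5     # documented default

def scale_latents_sketch(x: torch.Tensor) -> torch.Tensor:
    # raw latents -> [0, 1]
    return (x / (2 * latent_magnitude) + latent_shift).clamp(0, 1)

def unscale_latents_sketch(x: torch.Tensor) -> torch.Tensor:
    # [0, 1] -> raw latents (values clipped by the clamp above are not recovered exactly)
    return (x - latent_shift) * (2 * latent_magnitude)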

AutoencoderTinyOutput

class diffusers.models.autoencoder_tiny.AutoencoderTinyOutput

( latents: Tensor )

Parameters

  • latents (torch.Tensor) — Encoded outputs of the Encoder.

Output of AutoencoderTiny encoding method.
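
A short, hedged sketch of how this output is consumed: encode() returns an AutoencoderTinyOutput whose latents field holds the encoded tensor, and decode() turns latents back into an image batch. The 512x512 input size and the resulting latent shape are illustrative assumptions.

import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16).to("cuda")

# Placeholder image batch; in practice this comes from the pipeline's image processor.
image = torch.rand(1, 3, 512, 512, dtype=torch.float16, device="cuda")

with torch.no_grad():
    output = vae.encode(image)                   # AutoencoderTinyOutput
    latents = output.latents                     # e.g. shape (1, 4, 64, 64) with the defaults above
    reconstruction = vae.decode(latents).sample  # back to an image batch of shape (1, 3, 512, 512)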
