DDIMScheduler


Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.

The abstract from the paper is:

Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.

The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me.

Tips

The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose:

🧪 This is an experimental feature!

  1. rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR)

pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True)

  2. train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts)

--prediction_type="v_prediction"

  3. change the sampler to always start from the last timestep

pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")

  4. rescale classifier-free guidance to prevent over-exposure

image = pipe(prompt, guidance_rescale=0.7).images[0]

For example:

import torch
from diffusers import DiffusionPipeline, DDIMScheduler

pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipe.to("cuda")

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipe(prompt, guidance_rescale=0.7).images[0]

DDIMScheduler

class diffusers.DDIMScheduler

( num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = 'linear', trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None, clip_sample: bool = True, set_alpha_to_one: bool = True, steps_offset: int = 0, prediction_type: str = 'epsilon', thresholding: bool = False, dynamic_thresholding_ratio: float = 0.995, clip_sample_range: float = 1.0, sample_max_value: float = 1.0, timestep_spacing: str = 'leading', rescale_betas_zero_snr: bool = False )

Parameters

  • num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model.

  • beta_start (float, defaults to 0.0001) — The starting beta value of inference.

  • beta_end (float, defaults to 0.02) — The final beta value.

  • beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.

  • trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end.

  • clip_sample (bool, defaults to True) — Clip the predicted sample for numerical stability.

  • clip_sample_range (float, defaults to 1.0) — The maximum magnitude for sample clipping. Valid only when clip_sample=True.

  • set_alpha_to_one (bool, defaults to True) — Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, otherwise it uses the alpha value at step 0.

  • steps_offset (int, defaults to 0) — An offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as in Stable Diffusion.

  • prediction_type (str, defaults to "epsilon", optional) — Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of the Imagen Video paper).

  • thresholding (bool, defaults to False) — Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such as Stable Diffusion.

  • dynamic_thresholding_ratio (float, defaults to 0.995) — The ratio for the dynamic thresholding method. Valid only when thresholding=True.

  • sample_max_value (float, defaults to 1.0) — The threshold value for dynamic thresholding. Valid only when thresholding=True.

  • timestep_spacing (str, defaults to "leading") — The way the timesteps should be scaled. Refer to Table 2 of Common Diffusion Noise Schedules and Sample Steps are Flawed for more information.

  • rescale_betas_zero_snr (bool, defaults to False) — Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and dark samples instead of limiting it to samples with medium brightness. Loosely related to --offset_noise.

DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with non-Markovian guidance.

This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers, such as loading and saving.
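In practice you rarely construct the scheduler by hand; it is usually loaded from, or swapped into, a pipeline. A minimal sketch of both routes (the runwayml/stable-diffusion-v1-5 checkpoint is just an illustrative choice):

import torch
from diffusers import DDIMScheduler, DiffusionPipeline

# Construct directly, overriding a few of the defaults documented above.
scheduler = DDIMScheduler(beta_schedule="scaled_linear", timestep_spacing="trailing")

# Or derive it from an existing pipeline so the beta schedule, prediction
# type, and other settings match what the model was trained with.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)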

scale_model_input

( sample: FloatTensor, timestep: typing.Optional[int] = None ) → torch.FloatTensor

Parameters

  • sample (torch.FloatTensor) — The input sample.

  • timestep (int, optional) — The current timestep in the diffusion chain.

Returns

torch.FloatTensor

A scaled input sample.

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
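For DDIMScheduler this scaling is a no-op (the sample is returned unchanged), but calling it anyway keeps a denoising loop scheduler-agnostic. A small sketch:

import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler()
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 64, 64)
t = scheduler.timesteps[0]
# DDIM returns `sample` as-is; schedulers such as EulerDiscreteScheduler rescale
# it, so generic loops should always route the model input through this call.
scaled = scheduler.scale_model_input(sample, timestep=t)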

set_timesteps

( num_inference_steps: int, device: typing.Union[str, torch.device] = None )

Parameters

  • num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.

  • device (str or torch.device, optional) — The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).
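The placement of those timesteps depends on the timestep_spacing configuration; a quick way to see the effect is to print the schedules for a small number of steps (a sketch):

from diffusers import DDIMScheduler

leading = DDIMScheduler()  # timestep_spacing="leading" (the default)
leading.set_timesteps(num_inference_steps=10)
print(leading.timesteps)   # descending timesteps that start below 999

trailing = DDIMScheduler(timestep_spacing="trailing")
trailing.set_timesteps(num_inference_steps=10)
print(trailing.timesteps)  # always starts from the final training timestep, 999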

step

( model_output: FloatTensor, timestep: int, sample: FloatTensor, eta: float = 0.0, use_clipped_model_output: bool = False, generator = None, variance_noise: typing.Optional[torch.FloatTensor] = None, return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple

Parameters

  • model_output (torch.FloatTensor) — The direct output from the learned diffusion model.

  • timestep (int) — The current discrete timestep in the diffusion chain.

  • sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.

  • eta (float) — The weight of noise for added noise in the diffusion step.

  • use_clipped_model_output (bool, defaults to False) — If True, computes a “corrected” model_output from the clipped predicted original sample. Necessary because the predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no clipping has happened, the “corrected” model_output coincides with the one provided as input and use_clipped_model_output has no effect.

  • generator (torch.Generator, optional) — A random number generator.

  • variance_noise (torch.FloatTensor, optional) — Alternative to generating noise with generator by directly providing the noise for the variance itself. Useful for methods such as CycleDiffusion.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a DDIMSchedulerOutput or tuple.

Returns

~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple

If return_dict is True, DDIMSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
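Together with set_timesteps(), this is the call a hand-written denoising loop makes at every iteration. A schematic sketch, where scheduler is a DDIMScheduler and unet (a conditional UNet) and prompt_embeds (encoded text) are assumed to have been prepared beforehand:

import torch

scheduler.set_timesteps(50)
latents = torch.randn(1, 4, 64, 64, device="cuda", dtype=torch.float16)
latents = latents * scheduler.init_noise_sigma  # 1.0 for DDIM

for t in scheduler.timesteps:
    latent_model_input = scheduler.scale_model_input(latents, timestep=t)
    with torch.no_grad():
        noise_pred = unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample
    # step() reverses one diffusion step and returns x_{t-1} as `prev_sample`;
    # eta=0.0 keeps the update deterministic (pure DDIM).
    latents = scheduler.step(noise_pred, t, latents, eta=0.0).prev_sample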

DDIMSchedulerOutput

class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput

( prev_sample: FloatTensor, pred_original_sample: typing.Optional[torch.FloatTensor] = None )

Parameters

  • prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.

  • pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — The predicted denoised sample (x_{0}) based on the model output from the current timestep. pred_original_sample can be used to preview progress or for guidance.

Output class for the scheduler's step function output.
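Both fields are read directly off the object returned by step(); pred_original_sample is handy for previewing the final result mid-loop. A sketch reusing the noise_pred, t, and latents names assumed in the loop above:

out = scheduler.step(noise_pred, t, latents)
latents = out.prev_sample            # next model input in the denoising loop
preview = out.pred_original_sample   # rough estimate of the fully denoised x_0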

