Diffusers

🌍 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🌍 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions.
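For the simple-inference case, loading a pipeline and sampling takes only a few lines. The sketch below is illustrative rather than taken from this page; the checkpoint ID `runwayml/stable-diffusion-v1-5` and the GPU/half-precision settings are example choices.

```python
import torch
from diffusers import DiffusionPipeline

# Load a pretrained text-to-image pipeline from the Hugging Face Hub.
# The checkpoint ID is only an example; any compatible checkpoint works.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # move to GPU if one is available

# Run inference: the pipeline returns a list of PIL images.
image = pipe("An astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```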

The library has three main components:

  • State-of-the-art diffusion pipelines for inference with just a few lines of code.

  • Interchangeable noise schedulers for balancing trade-offs between generation speed and quality.

  • Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems (see the sketch after this list).
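To make the split between these components concrete, here is a minimal sketch of the scheduler and model APIs. It assumes the checkpoint IDs `runwayml/stable-diffusion-v1-5` and `google/ddpm-cat-256` and their on-Hub layout; treat the exact IDs and the 50-step setting as illustrative choices rather than recommendations.

```python
import torch
from diffusers import (
    DDPMScheduler,
    DiffusionPipeline,
    DPMSolverMultistepScheduler,
    UNet2DModel,
)

# Interchangeable schedulers: swap a pipeline's default scheduler for a
# faster solver without changing anything else in the pipeline.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# Models as building blocks: a bare UNet plus a scheduler is already a
# minimal unconditional image-generation system.
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")        # example checkpoint
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")  # matching noise schedule
scheduler.set_timesteps(50)  # number of denoising steps (illustrative)

sample = torch.randn(1, 3, model.config.sample_size, model.config.sample_size)
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample                      # predict the noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample    # one reverse-diffusion step
```

A custom end-to-end system is essentially this loop plus whatever conditioning and post-processing the task needs.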

Supported pipelines

The supported pipelines cover, among others, the following tasks:

  • Unconditional Audio Generation
  • Unconditional Image Generation
  • Image Generation
  • Image-to-Image Generation
  • Image-to-Image Text-Guided Generation
  • Text-Guided Image Inpainting
  • Text-to-Panorama Generation
  • Text-Guided Super Resolution Image-to-Image
  • Text-to-Image Generation
  • Text-to-Image Generation and Image-to-Image Text-Guided Generation with Stable unCLIP