Diffusers BOINC AI docs
  • 🌍GET STARTED
  • 🌍TUTORIALS
  • 🌍USING DIFFUSERS
  • 🌍OPTIMIZATION/SPECIAL HARDWARE
  • 🌍CONCEPTUAL GUIDES
  • 🌍API
    • 🌍MAIN CLASSES
    • 🌍MODELS
    • 🌍PIPELINES
      • Overview
      • AltDiffusion
      • Attend-and-Excite
      • Audio Diffusion
      • AudioLDM
      • AudioLDM 2
      • AutoPipeline
      • Consistency Models
      • ControlNet
      • ControlNet with Stable Diffusion XL
      • Cycle Diffusion
      • Dance Diffusion
      • DDIM
      • DDPM
      • DeepFloyd IF
      • DiffEdit
      • DiT
      • IF
      • InstructPix2Pix
      • Kandinsky
      • Kandinsky 2.2
      • Latent Diffusion
      • MultiDiffusion
      • MusicLDM
      • PaintByExample
      • Parallel Sampling of Diffusion Models
      • Pix2Pix Zero
      • PNDM
      • RePaint
      • Score SDE VE
      • Self-Attention Guidance
      • Semantic Guidance
      • Shap-E
      • Spectrogram Diffusion
      • 🌍STABLE DIFFUSION
      • Stable unCLIP
      • Stochastic Karras VE
      • Text-to-image model editing
      • Text-to-video
      • Text2Video-Zero
      • UnCLIP
      • Unconditional Latent Diffusion
      • UniDiffuser
      • Value-guided sampling
      • Versatile Diffusion
      • VQ Diffusion
      • Wuerstchen
    • 🌍SCHEDULERS