Load community pipelines

Community pipelines are any DiffusionPipeline class that differs from the original implementation as specified in its paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline.

There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here.

To load any community pipeline on the Hub, pass its repository id to the custom_pipeline argument along with the model repository you'd like to load the pipeline weights and components from. For example, the code below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32:

🔒 By loading a community pipeline from the BOINC AI Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically!


from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True
)

Loading an official community pipeline is similar, but you can mix loading weights from an official repository id with passing pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline and passes the CLIP model components directly to it:


from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel

clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"

feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id)
clip_model = CLIPModel.from_pretrained(clip_model_id)

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    use_safetensors=True,
)
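
Once loaded, the community pipeline can be used like any other DiffusionPipeline. The snippet below is a minimal usage sketch, assuming a CUDA device is available, that the pipeline returns images like a standard Stable Diffusion pipeline, and that the prompt and output filename are placeholders:

# move the pipeline to the GPU (assumes a CUDA device is available)
pipeline = pipeline.to("cuda")

# example prompt; the CLIP model guides generation toward images that match it
prompt = "a photograph of an astronaut riding a horse"
image = pipeline(prompt).images[0]
image.save("astronaut.png")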

For more information about community pipelines, take a look at the Community pipelines guide for how to use them, and if you're interested in adding a community pipeline, check out the How to contribute a community pipeline guide!
