AutoPipeline
🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you're new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you're using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively.
The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name.
Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting.
This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights.
Choose an AutoPipeline for your task
Start by picking a checkpoint. For example, if you're interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image:
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune"
image = pipeline(prompt, num_inference_steps=25).images[0]
Under the hood, AutoPipelineForText2Image:
1. automatically detects a "stable-diffusion" class from the model_index.json file
2. loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name
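You can verify this by inspecting the object returned in the snippet above; AutoPipelineForText2Image hands back an instance of the detected pipeline class.
# Continuing from the text-to-image example above
print(pipeline.__class__.__name__)  # StableDiffusionPipeline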
Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it'll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image.
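Here is a minimal sketch; the input image path and the strength value are placeholders, so substitute any RGB image and tune strength to taste.
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Placeholder input: any RGB image (local path or URL) works here
init_image = load_image("path/to/init_image.png")

prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune"
image = pipeline(prompt, image=init_image, strength=0.75, num_inference_steps=25).images[0]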
If you want to do inpainting, AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way.
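A minimal sketch along the same lines; the image, mask, and prompt below are placeholders (the mask should be white over the region you want to repaint).
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Placeholder inputs: an image to edit and a black-and-white mask of the region to repaint
init_image = load_image("path/to/init_image.png")
mask_image = load_image("path/to/mask_image.png")

prompt = "a stone castle on a hill, highly detailed"
image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]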
If you try to load an unsupported checkpoint, it'll throw an error.
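For example, the following sketch uses openai/shap-e-img2img purely as an illustration of a checkpoint whose pipeline class has no image-to-image mapping; loading it with AutoPipelineForImage2Image raises a ValueError.
from diffusers import AutoPipelineForImage2Image
import torch

# Illustrative only: this checkpoint's pipeline class is not supported by
# AutoPipelineForImage2Image, so from_pretrained() raises a ValueError
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True
)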
Use multiple pipelines
For some workflows, or if you're loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them, which would unnecessarily consume additional memory. For example, if you're using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost.
The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. For example, if you load a "stable-diffusion" class pipeline for text-to-image:
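A minimal loading sketch, reusing the runwayml/stable-diffusion-v1-5 checkpoint from earlier:
from diffusers import AutoPipelineForText2Image
import torch

pipeline_text2img = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

print(type(pipeline_text2img))
# <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'>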
Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline:
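Continuing the sketch, the new pipeline reuses the components that are already loaded above:
from diffusers import AutoPipelineForImage2Image

# Reuses the components of pipeline_text2img, so no additional memory is consumed
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img)

print(type(pipeline_img2img))
# <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'>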
If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline:
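For instance, a sketch that passes requires_safety_checker=False when loading the text-to-image pipeline and then checks that the setting carries over to the pipeline created with from_pipe():
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch

pipeline_text2img = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
    requires_safety_checker=False,
).to("cuda")

pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img)
# The argument passed to the original pipeline carries over to the new one
print(pipeline_img2img.config.requires_safety_checker)
# False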
You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument:
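A sketch of that, continuing from the pipelines above; requires_safety_checker is overridden through from_pipe(), while strength is a call-time argument passed when running the new pipeline (the input image is a placeholder):
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Override the configuration inherited from the original pipeline
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True)

# strength is passed at call time; the input image is a placeholder
init_image = load_image("path/to/init_image.png")
image = pipeline_img2img(
    "peasant and dragon combat, wood cutting style, viking era, bevel with rune",
    image=init_image,
    strength=0.3,
).images[0]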