ControlNet with Stable Diffusion XL
ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang and Maneesh Agrawala.
With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.
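For illustration, a depth-conditioned generation might look like the following minimal sketch; the checkpoint id (`diffusers/controlnet-depth-sdxl-1.0`) and the depth-map file are assumptions for illustration, not part of this reference.

```python
# Minimal sketch: condition SDXL generation on a depth map.
# The checkpoint id and depth-map path below are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

depth_map = load_image("depth_map.png")  # placeholder: any grayscale depth image
image = pipe("a photo of a cozy living room", image=depth_map).images[0]
```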
The abstract from the paper is:
We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.
You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub.
🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve!
If you don't see a checkpoint you're interested in, you can train your own SDXL ControlNet with our training script.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
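For example, a common way to trade quality for speed is to swap the scheduler on a loaded pipeline. A minimal sketch, assuming `pipe` is an SDXL ControlNet pipeline loaded as shown in the example later on this page:

```python
from diffusers import EulerDiscreteScheduler

# Swap the default scheduler for a faster one; from_config keeps the
# compatible settings from the current scheduler's configuration.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```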
StableDiffusionXLControlNetPipeline
class diffusers.StableDiffusionXLControlNetPipeline
( vae: AutoencoderKL, text_encoder: CLIPTextModel, text_encoder_2: CLIPTextModelWithProjection, tokenizer: CLIPTokenizer, tokenizer_2: CLIPTokenizer, unet: UNet2DConditionModel, controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel], scheduler: KarrasDiffusionSchedulers, force_zeros_for_empty_prompt: bool = True, add_watermarker: Optional[bool] = None )
Parameters
- vae (`AutoencoderKL`) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (`CLIPTextModel`) — Frozen text-encoder (clip-vit-large-patch14).
- text_encoder_2 (`CLIPTextModelWithProjection`) — Second frozen text-encoder (laion/CLIP-ViT-bigG-14-laion2B-39B-b160k).
- tokenizer (`CLIPTokenizer`) — A `CLIPTokenizer` to tokenize text.
- tokenizer_2 (`CLIPTokenizer`) — A `CLIPTokenizer` to tokenize text.
- unet (`UNet2DConditionModel`) — A `UNet2DConditionModel` to denoise the encoded image latents.
- controlnet (`ControlNetModel` or `List[ControlNetModel]`) — Provides additional conditioning to the `unet` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning (see the sketch after this parameter list).
- scheduler (`SchedulerMixin`) — A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- force_zeros_for_empty_prompt (`bool`, optional, defaults to `True`) — Whether the negative prompt embeddings should always be set to 0. Also see the config of `stabilityai/stable-diffusion-xl-base-1.0`.
- add_watermarker (`bool`, optional) — Whether to use the `invisible_watermark` library to watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no watermarker is used.
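As referenced in the `controlnet` parameter above, here is a sketch of combining two ControlNets; the checkpoint ids and conditioning images are assumptions for illustration:

```python
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Assumed checkpoint ids; any pair of SDXL ControlNets works the same way.
controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16),
]
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# One conditioning image and one scale per ControlNet; canny_image and
# depth_image are placeholder PIL images prepared beforehand.
image = pipe(
    "a futuristic research complex",
    image=[canny_image, depth_image],
    controlnet_conditioning_scale=[0.5, 0.5],
).images[0]
```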
Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods:
- load_textual_inversion() for loading textual inversion embeddings 
- loaders.LoraLoaderMixin.load_lora_weights() for loading LoRA weights 
- loaders.FromSingleFileMixin.from_single_file() for loading `.ckpt` files
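For instance, LoRA weights can be loaded directly into the pipeline. A minimal sketch; the repo id is a hypothetical placeholder:

```python
# Load LoRA weights into an already-created pipeline.
# "some-user/some-sdxl-lora" is a hypothetical repo id for illustration only.
pipe.load_lora_weights("some-user/some-sdxl-lora")
```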
__call__
( prompt: Union[str, List[str]] = None, prompt_2: Optional[Union[str, List[str]]] = None, image: Union[PIL.Image.Image, np.ndarray, torch.FloatTensor, List[PIL.Image.Image], List[np.ndarray], List[torch.FloatTensor]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 5.0, negative_prompt: Optional[Union[str, List[str]]] = None, negative_prompt_2: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.FloatTensor] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, pooled_prompt_embeds: Optional[torch.FloatTensor] = None, negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None, output_type: Optional[str] = 'pil', return_dict: bool = True, callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, controlnet_conditioning_scale: Union[float, List[float]] = 1.0, guess_mode: bool = False, control_guidance_start: Union[float, List[float]] = 0.0, control_guidance_end: Union[float, List[float]] = 1.0, original_size: Tuple[int, int] = None, crops_coords_top_left: Tuple[int, int] = (0, 0), target_size: Tuple[int, int] = None, negative_original_size: Optional[Tuple[int, int]] = None, negative_crops_coords_top_left: Tuple[int, int] = (0, 0), negative_target_size: Optional[Tuple[int, int]] = None ) → StableDiffusionPipelineOutput or tuple
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders.
- image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) — The ControlNet input condition to provide guidance to the `unet` for generation. If the type is specified as `torch.FloatTensor`, it is passed to the ControlNet as is. `PIL.Image.Image` can also be accepted as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `__init__`, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
- height (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The height in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
- width (`int`, optional, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) — The width in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
- num_inference_steps (`int`, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 5.0) — A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- negative_prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- num_images_per_prompt (`int`, optional, defaults to 1) — The number of images to generate per prompt.
- eta (`float`, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, optional) — A `torch.Generator` to make generation deterministic.
- latents (`torch.FloatTensor`, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- pooled_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, pooled text embeddings are generated from the `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, pooled `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (`Callable`, optional) — A function called every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, optional, defaults to 1) — The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- controlnet_conditioning_scale (`float` or `List[float]`, optional, defaults to 1.0) — The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added to the residual in the original `unet`. If multiple ControlNets are specified in `__init__`, you can set the corresponding scale as a list.
- guess_mode (`bool`, optional, defaults to `False`) — In guess mode, the ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- control_guidance_start (`float` or `List[float]`, optional, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.
- control_guidance_end (`float` or `List[float]`, optional, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.
- original_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — If `original_size` is not the same as `target_size`, the image will appear to be down- or upsampled. `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- crops_coords_top_left (`Tuple[int]`, optional, defaults to (0, 0)) — `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- target_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — For most cases, `target_size` should be set to the desired height and width of the generated image. If not specified, it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- negative_original_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- negative_crops_coords_top_left (`Tuple[int]`, optional, defaults to (0, 0)) — To negatively condition the generation process based on specific crop coordinates. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- negative_target_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. For most cases, it should be the same as `target_size`. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
Returns
StableDiffusionPipelineOutput or tuple
If `return_dict` is `True`, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned containing the output images.
The call function to the pipeline for generation.
Examples:
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch
>>> import cv2
>>> from PIL import Image
>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"
>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )
>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5  # recommended for good generalization
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()
>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)
>>> # generate image
>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]
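The `controlnet_conditioning_scale`, `guess_mode`, and conditioning-window parameters described above compose in the same call. A minimal sketch reusing `pipe`, `prompt`, and `canny_image` from the example; the specific values are illustrative, not recommendations:

```python
# Apply the ControlNet only for the first 80% of the denoising steps, and let
# the ControlNet encoder infer content in guess mode (a guidance_scale between
# 3.0 and 5.0 is recommended with guess_mode).
image = pipe(
    prompt,
    image=canny_image,
    controlnet_conditioning_scale=0.5,
    control_guidance_start=0.0,
    control_guidance_end=0.8,
    guess_mode=True,
    guidance_scale=3.0,
).images[0]
```

disable_vae_slicing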
( )
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.
disable_vae_tiling
( )
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.
enable_vae_slicing
( )
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
enable_vae_tiling
( )
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
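A short sketch of how these memory toggles are typically used together on a loaded pipeline:

```python
# Trade a little speed for memory when decoding large batches or resolutions.
pipe.enable_vae_slicing()  # decode one slice of the batch at a time
pipe.enable_vae_tiling()   # decode/encode the latent in tiles

# ...generate images...

pipe.disable_vae_slicing()  # restore single-step decoding
pipe.disable_vae_tiling()
```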
encode_prompt
( prompt: str, prompt_2: Optional[str] = None, device: Optional[torch.device] = None, num_images_per_prompt: int = 1, do_classifier_free_guidance: bool = True, negative_prompt: Optional[str] = None, negative_prompt_2: Optional[str] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, pooled_prompt_embeds: Optional[torch.FloatTensor] = None, negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None, lora_scale: Optional[float] = None )
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt to be encoded.
- prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders.
- device (`torch.device`, optional) — The torch device on which to run the encoding.
- num_images_per_prompt (`int`) — The number of images that should be generated per prompt.
- do_classifier_free_guidance (`bool`) — Whether to use classifier-free guidance or not.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- negative_prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- pooled_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- lora_scale (`float`, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
Encodes the prompt into text encoder hidden states.
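A minimal sketch of precomputing embeddings once and reusing them across calls; the prompts and the conditioning image are placeholders:

```python
# Precompute the four SDXL embedding tensors, then pass them instead of raw
# prompts; useful when generating many images from the same prompt.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt="aerial view of a jungle complex", negative_prompt="low quality")

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=canny_image,  # placeholder conditioning image
).images[0]
```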
StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
( images: Union[List[PIL.Image.Image], np.ndarray], nsfw_content_detected: Optional[List[bool]] )
Parameters
- images (`List[PIL.Image.Image]` or `np.ndarray`) — List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`.
- nsfw_content_detected (`List[bool]`) — List indicating whether the corresponding generated image contains "not-safe-for-work" (NSFW) content, or `None` if safety checking could not be performed.
Output class for Stable Diffusion pipelines.