DiffEdit

DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.

The abstract from the paper is:

Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.

The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo.

This pipeline was contributed by clarencechen. ❤️

Tips

  • The pipeline can generate masks that can be fed into other inpainting pipelines.

  • In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image.

  • The generate_mask() function exposes two prompt arguments, source_prompt and target_prompt, that let you control the locations of the semantic edits in the final image. Say you wanted to translate from “cat” to “dog”; in this case, the edit direction is “cat -> dog”. To reflect this in the generated mask, assign the prompt containing “cat” to source_prompt and the prompt containing “dog” to target_prompt.

  • When generating partially inverted latents using invert, assign a caption or text embedding describing the overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives.

  • When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt and the target concept to prompt. Taking the above example, assign the prompt containing “cat” to negative_prompt and the prompt containing “dog” to prompt.

  • If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:

    • Swap the source_prompt and target_prompt in the arguments to generate_mask.

    • Change the input prompt in invert() to include “dog”.

    • Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image.

  • The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. A consolidated sketch of the full workflow follows below.
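
Putting the tips together, here is a minimal sketch of the “cat -> dog” workflow described above. The checkpoint and scheduler setup mirror the examples later on this page; the local file name cat.png and the exact prompt strings are assumptions for illustration.

import torch
from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

init_image = load_image("cat.png").resize((768, 768))  # hypothetical input image of a cat

source_prompt = "a photo of a cat"  # describes what is in the input image
target_prompt = "a photo of a dog"  # describes the desired edit

# 1. Generate the mask by contrasting predictions for the source and target prompts.
mask_image = pipe.generate_mask(
    image=init_image, source_prompt=source_prompt, target_prompt=target_prompt
)

# 2. Partially invert the image, guided by a caption of the source image.
image_latents = pipe.invert(image=init_image, prompt=source_prompt).latents

# 3. Edit: target concept as prompt, source concept as negative_prompt.
image = pipe(
    prompt=target_prompt,
    negative_prompt=source_prompt,
    mask_image=mask_image,
    image_latents=image_latents,
).images[0]
image.save("dog.png")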

StableDiffusionDiffEditPipeline

class diffusers.StableDiffusionDiffEditPipeline

( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, inverse_scheduler: DDIMInverseScheduler, requires_safety_checker: bool = True )

Parameters

  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

  • text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).

  • tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.

  • unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.

  • scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents.

  • inverse_scheduler (DDIMInverseScheduler) — A scheduler to be used in combination with unet to fill in the unmasked part of the input latents.

  • safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.

  • feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

This is an experimental feature!

Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading and saving methods:

  • load_textual_inversion() for loading textual inversion embeddings

  • load_lora_weights() for loading LoRA weights

  • save_lora_weights() for saving LoRA weights

generate_mask

( image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None, target_prompt: typing.Union[str, typing.List[str], NoneType] = None, target_negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, target_prompt_embeds: typing.Optional[torch.FloatTensor] = None, target_negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, source_prompt: typing.Union[str, typing.List[str], NoneType] = None, source_negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, source_prompt_embeds: typing.Optional[torch.FloatTensor] = None, source_negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, num_maps_per_mask: typing.Optional[int] = 10, mask_encode_strength: typing.Optional[float] = 0.5, mask_thresholding_ratio: typing.Optional[float] = 3.0, num_inference_steps: int = 50, guidance_scale: float = 7.5, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, output_type: typing.Optional[str] = 'np', cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None ) → List[PIL.Image.Image] or np.array

Parameters

  • image (PIL.Image.Image) — Image or tensor representing an image batch to be used for computing the mask.

  • target_prompt (str or List[str], optional) — The prompt or prompts to guide semantic mask generation. If not defined, you need to pass target_prompt_embeds.

  • target_negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass target_negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).

  • target_prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the target_prompt input argument.

  • target_negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, they are generated from the target_negative_prompt input argument.

  • source_prompt (str or List[str], optional) — The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to pass source_prompt_embeds instead.

  • source_negative_prompt (str or List[str], optional) — The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you need to pass source_negative_prompt_embeds instead.

  • source_prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the source_prompt input argument.

  • source_negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the source_negative_prompt input argument.

  • num_maps_per_mask (int, optional, defaults to 10) — The number of noise maps sampled to generate the semantic mask using DiffEdit.

  • mask_encode_strength (float, optional, defaults to 0.5) — The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 and 1.

  • mask_thresholding_ratio (float, optional, defaults to 3.0) — The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before mask binarization.

  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

  • guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.

  • generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.

  • output_type (str, optional, defaults to "np") — The output format of the generated image. Choose between PIL.Image or np.array.

  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttnProcessor as defined in self.processor.

Returns

List[PIL.Image.Image] or np.array

When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor).

Generate a latent mask given a mask prompt, a target prompt, and an image.


>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]

invert

( prompt: typing.Union[str, typing.List[str], NoneType] = None, image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None, num_inference_steps: int = 50, inpaint_strength: float = 0.8, guidance_scale: float = 7.5, negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, decode_latents: bool = False, output_type: typing.Optional[str] = 'pil', return_dict: bool = True, callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None, callback_steps: typing.Optional[int] = 1, cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, lambda_auto_corr: float = 20.0, lambda_kl: float = 20.0, num_reg_steps: int = 0, num_auto_corr_rolls: int = 5 )

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.

  • image (PIL.Image.Image) — Image or tensor representing an image batch to produce the inverted latents guided by prompt.

  • inpaint_strength (float, optional, defaults to 0.8) — Indicates the extent of the noising process to run latent inversion. Must be between 0 and 1. When inpaint_strength is 1, the inversion process is run for the full number of iterations specified in num_inference_steps. image is used as a reference for the inversion process, and adding more noise increases inpaint_strength. If inpaint_strength is 0, no inversion is performed.

  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

  • guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.

  • negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).

  • generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.

  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.

  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.

  • decode_latents (bool, optional, defaults to False) — Whether or not to decode the inverted latents into a generated image. Setting this argument to True decodes all inverted latents for each timestep into a list of generated images.

  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a plain tuple.

  • callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).

  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.

  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttnProcessor as defined in self.processor.

  • lambda_auto_corr (float, optional, defaults to 20.0) — Lambda parameter to control auto correction.

  • lambda_kl (float, optional, defaults to 20.0) — Lambda parameter to control Kullback-Leibler divergence output.

  • num_reg_steps (int, optional, defaults to 0) — Number of regularization loss steps.

  • num_auto_corr_rolls (int, optional, defaults to 5) — Number of auto correction roll steps.

Generate inverted latents given a prompt and image.


>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A bowl of fruits"

>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents

__call__

( prompt: typing.Union[str, typing.List[str], NoneType] = None, mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None, image_latents: typing.Union[torch.FloatTensor, PIL.Image.Image] = None, inpaint_strength: typing.Optional[float] = 0.8, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, num_images_per_prompt: typing.Optional[int] = 1, eta: float = 0.0, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, latents: typing.Optional[torch.FloatTensor] = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, output_type: typing.Optional[str] = 'pil', return_dict: bool = True, callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None, callback_steps: int = 1, cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None ) → StableDiffusionPipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.

  • mask_image (PIL.Image.Image) — Image or tensor representing an image batch to mask the generated image. White pixels in the mask are repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, 1, H, W).

  • image_latents (PIL.Image.Image or torch.FloatTensor) — Partially noised image latents from the inversion process to be used as inputs for image generation.

  • inpaint_strength (float, optional, defaults to 0.8) — Indicates the extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the denoising process is run on the masked area for the full number of iterations specified in num_inference_steps. image_latents is used as a reference for the masked area, and adding more noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs.

  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

  • guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.

  • negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).

  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.

  • eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.

  • generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.

  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.

  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.

  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.

  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.

  • callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).

  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.

  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.

The call function to the pipeline for generation.


>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]
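
The callback and callback_steps arguments described above can be used to monitor a long-running generation. A minimal sketch, reusing pipe, prompt, mask_image, and image_latents from the example; the log_progress name is just for illustration:

# Hypothetical progress logger invoked every `callback_steps` denoising steps.
def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    print(f"step {step:4d} | timestep {int(timestep)} | latents {tuple(latents.shape)}")

image = pipe(
    prompt=prompt,
    mask_image=mask_image,
    image_latents=image_latents,
    callback=log_progress,
    callback_steps=10,  # log every 10th step
).images[0]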

disable_vae_slicing

( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling

( )

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling

( )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
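
Both toggles act on the pipeline’s VAE and can be combined. A typical usage pattern, assuming the pipe object from the examples above:

# Trade a little speed for lower peak memory during VAE decoding.
pipe.enable_vae_slicing()  # decode the batch one image at a time
pipe.enable_vae_tiling()   # decode each image tile by tile (helps with large images)

# ... run generate_mask / invert / the pipeline as usual ...

# Restore single-pass decoding afterwards.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()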

encode_prompt

( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, lora_scale: typing.Optional[float] = None )

Parameters

  • prompt (str or List[str], optional) — prompt to be encoded

  • device (torch.device) — torch device

  • num_images_per_prompt (int) — number of images that should be generated per prompt

  • do_classifier_free_guidance (bool) — whether to use classifier free guidance or not

  • negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

  • lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

Encodes the prompt into text encoder hidden states.
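
A sketch of precomputing embeddings once and reusing them across calls. The unpacking below assumes encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple, as in recent diffusers releases; check the behavior of your installed version.

# Encode the target and source prompts once, then reuse the embeddings.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A bowl of pears",
    device=torch.device("cuda"),
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="A bowl of fruits",
)

# Pass the cached embeddings instead of re-encoding the prompt on each call.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    mask_image=mask_image,
    image_latents=image_latents,
).images[0]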

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput

( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray], nsfw_content_detected: typing.Optional[typing.List[bool]] )

Parameters

  • images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).

  • nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
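
A short usage sketch, reusing pipe, prompt, mask_image, and image_latents from the examples above:

# The default call returns a StableDiffusionPipelineOutput.
output = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents)
output.images[0].save("edited.png")
if output.nsfw_content_detected is not None and output.nsfw_content_detected[0]:
    print("Safety checker flagged this image.")

# With return_dict=False the pipeline returns a plain (images, nsfw_content_detected) tuple.
images, nsfw_flags = pipe(
    prompt=prompt, mask_image=mask_image, image_latents=image_latents, return_dict=False
)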
