Diffusers BOINC AI docs

Attention Processor

An attention processor is a class for applying different types of attention mechanisms.

AttnProcessor

class diffusers.models.attention_processor.AttnProcessor

( )

Default processor for performing attention-related computations.

AttnProcessor2_0

class diffusers.models.attention_processor.AttnProcessor2_0

( )

Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
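
Both processors compute standard scaled dot-product attention; AttnProcessor2_0 simply dispatches to PyTorch 2.0's fused torch.nn.functional.scaled_dot_product_attention kernel. As an illustration of the underlying computation only (not the library's implementation), a single-head NumPy sketch:

```python
import numpy as np

def sdp_attention(q, k, v):
    """Scaled dot-product attention: softmax(q k^T / sqrt(d)) v.

    q: (seq_q, d), k: (seq_k, d), v: (seq_k, d_v) -- a single head.
    Illustration only, not the diffusers implementation.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ v
```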

LoRAAttnProcessor

class diffusers.models.attention_processor.LoRAAttnProcessor

( hidden_size, cross_attention_dim = None, rank = 4, network_alpha = None, **kwargs )

Parameters

  • hidden_size (int, optional) — The hidden size of the attention layer.

  • cross_attention_dim (int, optional) — The number of channels in the encoder_hidden_states.

  • rank (int, defaults to 4) — The dimension of the LoRA update matrices.

  • network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.

Processor for implementing the LoRA attention mechanism.
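
The LoRA mechanism replaces each frozen attention projection y = x Wᵀ with y = x Wᵀ + scale · (x Aᵀ) Bᵀ, where A (rank × in) and B (out × rank) are the low-rank update matrices and scale is network_alpha / rank when an alpha is given. A hedged NumPy sketch of the idea (illustrative names and shapes, not the library's code):

```python
import numpy as np

def lora_projection(x, w, down, up, rank=4, network_alpha=None):
    """Apply a frozen weight w plus a low-rank LoRA update.

    x: (seq, in), w: (out, in), down: (rank, in), up: (out, rank).
    Sketch of the mechanism only, not the diffusers implementation.
    """
    scale = network_alpha / rank if network_alpha is not None else 1.0
    # base projection + scaled low-rank correction
    return x @ w.T + scale * (x @ down.T) @ up.T
```

Note that with up initialized to zero, the augmented projection matches the frozen one exactly, so training starts from the base model's behavior.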

LoRAAttnProcessor2_0

class diffusers.models.attention_processor.LoRAAttnProcessor2_0

( hidden_size, cross_attention_dim = None, rank = 4, network_alpha = None, **kwargs )

Parameters

  • hidden_size (int) — The hidden size of the attention layer.

  • cross_attention_dim (int, optional) — The number of channels in the encoder_hidden_states.

  • rank (int, defaults to 4) — The dimension of the LoRA update matrices.

  • network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.

Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product attention.

CustomDiffusionAttnProcessor

class diffusers.models.attention_processor.CustomDiffusionAttnProcessor

( train_kv = True, train_q_out = True, hidden_size = None, cross_attention_dim = None, out_bias = True, dropout = 0.0 )

Parameters

  • train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.

  • train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.

  • hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.

  • cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.

  • out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.

  • dropout (float, optional, defaults to 0.0) — The dropout probability to use.

Processor for implementing attention for the Custom Diffusion method.

AttnAddedKVProcessor

class diffusers.models.attention_processor.AttnAddedKVProcessor

( )

Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.
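
Conceptually, these processors keep the usual keys and values and prepend an extra, learnable set projected from the text-encoder states, so attention runs over the concatenation. A rough NumPy sketch of that concatenation step (assumed single-head shapes, illustration only):

```python
import numpy as np

def added_kv_attention(q, k, v, k_extra, v_extra):
    """Attend over learnable extra key/value rows prepended to the usual ones.

    q: (seq_q, d); k: (seq_k, d), v: (seq_k, d_v);
    k_extra: (seq_e, d), v_extra: (seq_e, d_v).
    Illustration of the mechanism only.
    """
    k_all = np.concatenate([k_extra, k], axis=0)
    v_all = np.concatenate([v_extra, v], axis=0)
    d = q.shape[-1]
    scores = q @ k_all.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v_all
```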

AttnAddedKVProcessor2_0

class diffusers.models.attention_processor.AttnAddedKVProcessor2_0

( )

Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.

LoRAAttnAddedKVProcessor

class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor

( hidden_size, cross_attention_dim = None, rank = 4, network_alpha = None )

Parameters

  • hidden_size (int, optional) — The hidden size of the attention layer.

  • cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.

  • rank (int, defaults to 4) — The dimension of the LoRA update matrices.

  • network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.

Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text encoder.

XFormersAttnProcessor

class diffusers.models.attention_processor.XFormersAttnProcessor

( attention_op: typing.Optional[typing.Callable] = None )

Parameters

  • attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.

Processor for implementing memory efficient attention using xFormers.
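
xFormers saves memory by never materializing the full seq_q × seq_k attention matrix; instead the softmax is computed incrementally over blocks of keys. A simplified NumPy sketch of that online-softmax idea (the real kernels are fused GPU code, this is illustration only):

```python
import numpy as np

def chunked_attention(q, k, v, chunk=2):
    """Attention over key/value chunks with a running softmax, so only a
    (seq_q, chunk) block of scores exists at any time.
    Simplified illustration of the memory-efficient idea, not xFormers' kernels.
    """
    d = q.shape[-1]
    m = np.full(q.shape[0], -np.inf)           # running row-wise max
    denom = np.zeros(q.shape[0])               # running softmax denominator
    acc = np.zeros((q.shape[0], v.shape[-1]))  # running weighted sum of values
    for start in range(0, k.shape[0], chunk):
        kc, vc = k[start:start + chunk], v[start:start + chunk]
        s = q @ kc.T / np.sqrt(d)              # one block of scores
        m_new = np.maximum(m, s.max(axis=-1))
        correction = np.exp(m - m_new)         # rescale earlier partial sums
        p = np.exp(s - m_new[:, None])
        denom = denom * correction + p.sum(axis=-1)
        acc = acc * correction[:, None] + p @ vc
        m = m_new
    return acc / denom[:, None]
```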

LoRAXFormersAttnProcessor

class diffusers.models.attention_processor.LoRAXFormersAttnProcessor

( hidden_size, cross_attention_dim, rank = 4, attention_op: typing.Optional[typing.Callable] = None, network_alpha = None, **kwargs )

Parameters

  • hidden_size (int, optional) — The hidden size of the attention layer.

  • cross_attention_dim (int, optional) — The number of channels in the encoder_hidden_states.

  • rank (int, defaults to 4) — The dimension of the LoRA update matrices.

  • network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.

  • attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.

Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers.

CustomDiffusionXFormersAttnProcessor

class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor

( train_kv = True, train_q_out = False, hidden_size = None, cross_attention_dim = None, out_bias = True, dropout = 0.0, attention_op: typing.Optional[typing.Callable] = None )

Parameters

  • train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.

  • train_q_out (bool, defaults to False) — Whether to newly train query matrices corresponding to the latent image features.

  • hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.

  • cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.

  • out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.

  • dropout (float, optional, defaults to 0.0) — The dropout probability to use.

  • attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.

Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.

SlicedAttnProcessor

class diffusers.models.attention_processor.SlicedAttnProcessor

( slice_size )

Parameters

  • slice_size (int, optional) — The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and attention_head_dim must be a multiple of the slice_size.

Processor for implementing sliced attention.
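
Sliced attention trades speed for memory by processing the leading (batch × heads) dimension in slices of slice_size rather than all at once, so only a fraction of the attention matrices are in memory at any time. A NumPy sketch of the idea (assumed shapes, illustration only; the processor operates on projected hidden states inside the model):

```python
import numpy as np

def sliced_attention(q, k, v, slice_size):
    """Compute attention slice-by-slice over the leading (batch * heads) axis.

    q, k: (heads, seq, d), v: (heads, seq, d_v). Each iteration holds at
    most slice_size attention matrices in memory.
    Illustration of the idea, not the diffusers implementation.
    """
    d = q.shape[-1]
    out = np.empty(q.shape[:-1] + (v.shape[-1],))
    for start in range(0, q.shape[0], slice_size):
        qs = q[start:start + slice_size]
        ks = k[start:start + slice_size]
        vs = v[start:start + slice_size]
        s = qs @ ks.swapaxes(-1, -2) / np.sqrt(d)
        s -= s.max(axis=-1, keepdims=True)
        w = np.exp(s)
        w /= w.sum(axis=-1, keepdims=True)
        out[start:start + slice_size] = w @ vs
    return out
```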

SlicedAttnAddedKVProcessor

class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor

( slice_size )

Parameters

  • slice_size (int, optional) — The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and attention_head_dim must be a multiple of the slice_size.

Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
