Transformer2D

A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.

When the input is continuous:

  1. Project the input and reshape it to (batch_size, sequence_length, feature_dimension).

  2. Apply the Transformer blocks in the standard way.

  3. Reshape to image.
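For the continuous case, a minimal sketch of what this looks like in code (the layer sizes and tensor shapes below are illustrative assumptions, not values from this page):

```python
import torch
from diffusers import Transformer2DModel

# Continuous mode is selected by passing in_channels.
model = Transformer2DModel(
    num_attention_heads=2,
    attention_head_dim=32,   # inner dimension = 2 * 32 = 64
    in_channels=64,
    num_layers=1,
    norm_num_groups=32,      # must divide in_channels
)

# (batch_size, channels, height, width) is projected and reshaped to
# (batch_size, height * width, inner_dim) before the Transformer blocks run.
hidden_states = torch.randn(1, 64, 16, 16)

with torch.no_grad():
    out = model(hidden_states)

print(out.sample.shape)  # torch.Size([1, 64, 16, 16]), reshaped back to an image
```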

When the input is discrete:

It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked.

  1. Convert input (classes of latent pixels) to embeddings and apply positional embeddings.

  2. Apply the Transformer blocks in the standard way.

  3. Predict classes of unnoised image.
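And a corresponding minimal sketch for the discrete case (again with small, assumed sizes; a real checkpoint such as VQ Diffusion's transformer uses much larger values):

```python
import torch
from diffusers import Transformer2DModel

# Discrete mode is selected by passing num_vector_embeds (and sample_size).
model = Transformer2DModel(
    num_attention_heads=2,
    attention_head_dim=32,
    num_vector_embeds=10,   # 9 "real" classes plus the masked latent pixel class
    sample_size=8,          # the latent grid is 8 x 8, i.e. 64 latent pixels
    num_layers=1,
)

# Class indices of the latent pixels: LongTensor of shape (batch_size, num_latent_pixels).
latents = torch.randint(0, 10, (1, 8 * 8))

with torch.no_grad():
    out = model(latents)

# (batch_size, num_vector_embeds - 1, num_latent_pixels): a distribution over the
# unmasked classes for every latent pixel.
print(out.sample.shape)  # torch.Size([1, 9, 64])
```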

Transformer2DModel

class diffusers.Transformer2DModel

( num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: typing.Optional[int] = None, out_channels: typing.Optional[int] = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: typing.Optional[int] = None, attention_bias: bool = False, sample_size: typing.Optional[int] = None, num_vector_embeds: typing.Optional[int] = None, patch_size: typing.Optional[int] = None, activation_fn: str = 'geglu', num_embeds_ada_norm: typing.Optional[int] = None, use_linear_projection: bool = False, only_cross_attention: bool = False, double_self_attention: bool = False, upcast_attention: bool = False, norm_type: str = 'layer_norm', norm_elementwise_affine: bool = True, attention_type: str = 'default' )

Parameters

  • num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.

  • attention_head_dim (int, optional, defaults to 88) — The number of channels in each head.

  • in_channels (int, optional) — The number of channels in the input and output (specify if the input is continuous).

  • num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use.

  • dropout (float, optional, defaults to 0.0) — The dropout probability to use.

  • cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use.

  • sample_size (int, optional) — The width of the latent images (specify if the input is discrete). This is fixed during training since it is used to learn a number of position embeddings.

  • num_vector_embeds (int, optional) — The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). Includes the class for the masked latent pixel.

  • activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward.

  • num_embeds_ada_norm (int, optional) — The number of diffusion steps used during training. Pass if at least one of the norm_layers is AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are added to the hidden states.

    During inference, you can denoise for up to but not more than num_embeds_ada_norm steps.

  • attention_bias (bool, optional) — Configure if the TransformerBlocks' attention should contain a bias parameter.

A 2D Transformer model for image-like data.
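In practice a configured Transformer2DModel is usually loaded from a pretrained pipeline rather than built by hand. A hedged sketch, assuming the VQ Diffusion checkpoint layout in which the discrete-input transformer lives in a transformer/ subfolder (the repository id and subfolder name are assumptions, not taken from this page):

```python
from diffusers import Transformer2DModel

# Assumed checkpoint layout: the VQ Diffusion pipeline repository stores its
# Transformer2DModel weights and config under the "transformer" subfolder.
model = Transformer2DModel.from_pretrained(
    "microsoft/vq-diffusion-ithq", subfolder="transformer"
)

# The saved config records whether the model is discrete or continuous.
print(model.config.num_vector_embeds, model.config.sample_size)
```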

forward

( hidden_states: Tensor, encoder_hidden_states: typing.Optional[torch.Tensor] = None, timestep: typing.Optional[torch.LongTensor] = None, class_labels: typing.Optional[torch.LongTensor] = None, cross_attention_kwargs: typing.Dict[str, typing.Any] = None, attention_mask: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, return_dict: bool = True )

Parameters

  • hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — Input hidden_states.

  • encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.

  • timestep (torch.LongTensor, optional) — Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.

  • class_labels (torch.LongTensor of shape (batch size, num classes), optional) — Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in AdaLayerZeroNorm.

  • encoder_attention_mask (torch.Tensor, optional) — Cross-attention mask applied to encoder_hidden_states. Two formats are supported:

    • Mask (batch, sequence_length): True = keep, False = discard.

    • Bias (batch, 1, sequence_length): 0 = keep, -10000 = discard.

    If ndim == 2, the input is interpreted as a mask and converted into a bias consistent with the format above. This bias is added to the cross-attention scores.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a UNet2DConditionOutput instead of a plain tuple.

The Transformer2DModel forward method.
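To illustrate the two encoder_attention_mask formats above, here is a minimal sketch that passes a 2-D boolean keep/discard mask together with cross-attention conditioning (all sizes are illustrative assumptions):

```python
import torch
from diffusers import Transformer2DModel

# Continuous input with cross-attention enabled via cross_attention_dim.
model = Transformer2DModel(
    num_attention_heads=2,
    attention_head_dim=32,
    in_channels=32,
    cross_attention_dim=64,
    norm_num_groups=32,
)

hidden_states = torch.randn(1, 32, 8, 8)        # (batch, channels, height, width)
encoder_hidden_states = torch.randn(1, 6, 64)   # (batch, sequence_length, cross_attention_dim)

# 2-D mask of shape (batch, sequence_length): True = keep, False = discard.
# With ndim == 2 it is converted internally into an additive bias of shape
# (batch, 1, sequence_length) that is added to the cross-attention scores.
encoder_attention_mask = torch.tensor([[True, True, True, True, False, False]])

with torch.no_grad():
    out = model(
        hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
    )

print(out.sample.shape)  # torch.Size([1, 32, 8, 8])
```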

Transformer2DModelOutput

class diffusers.models.transformer_2d.Transformer2DModelOutput

( sample: FloatTensor )

Parameters

  • sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
