Text-to-video
VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation is by Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan.
The abstract from the paper is:
A diffusion probabilistic model (DPM), which constructs a forward diffusion process by gradually adding noise to data points and learns the reverse denoising process to generate new samples, has been shown to handle complex data distribution. Despite its recent success in image synthesis, applying DPMs to video generation is still challenging due to high-dimensional data spaces. Previous methods usually adopt a standard diffusion process, where frames in the same video clip are destroyed with independent noises, ignoring the content redundancy and temporal correlation. This work presents a decomposed diffusion process via resolving the per-frame noise into a base noise that is shared among all frames and a residual noise that varies along the time axis. The denoising pipeline employs two jointly-learned networks to match the noise decomposition accordingly. Experiments on various datasets confirm that our approach, termed as VideoFusion, surpasses both GAN-based and diffusion-based alternatives in high-quality video generation. We further show that our decomposed formulation can benefit from pre-trained image diffusion models and well-support text-conditioned video creation.
You can find additional information about Text-to-Video on the project page and in the original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense.
text-to-video-ms-1.7b
Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps):
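A minimal sketch of this step (the checkpoint is the damo-vilab/text-to-video-ms-1.7b model referenced above; the prompt is illustrative):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the ModelScope text-to-video checkpoint in half precision
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

prompt = "Spiderman is surfing"
# The default call generates 16 frames (2 seconds at 8 fps)
video_frames = pipe(prompt).frames
video_path = export_to_video(video_frames)
```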
Diffusers supports different optimization techniques to improve the latency and memory footprint of a pipeline. Since videos are often more memory-heavy than images, we can enable CPU offloading and VAE slicing to keep the memory footprint at bay.
Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:
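A sketch of that longer generation, using the enable_model_cpu_offload() and enable_vae_slicing() methods documented below (the prompt is illustrative):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
# Offload submodules to the CPU when idle and decode the latents slice by slice
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=64).frames
video_path = export_to_video(video_frames)
```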
It takes just 7 GB of GPU memory to generate the 64 video frames using PyTorch 2.0, "fp16" precision, and the techniques mentioned above.
We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion:
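For instance, swapping in DPMSolverMultistepScheduler via from_config (the scheduler choice here is one possible option):

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
# Replace the default scheduler, exactly as with Stable Diffusion pipelines
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25).frames
video_path = export_to_video(video_frames)
```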
Here are some sample outputs:
An astronaut riding a horse.
Darth Vader surfing in waves.
cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL
Zeroscope models are watermark-free and have been trained on specific sizes such as 576x320 and 1024x576. One should first generate a video using the lower-resolution cerspense/zeroscope_v2_576w checkpoint with TextToVideoSDPipeline, which can then be upscaled using VideoToVideoSDPipeline and the cerspense/zeroscope_v2_XL checkpoint.
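A sketch of the first, lower-resolution pass (enable_forward_chunking and enable_vae_slicing are optional memory savers; the prompt and frame count are illustrative):

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
# Memory savers: chunk the UNet feed-forward computation over the frame
# dimension and decode the VAE slice by slice
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=24).frames
video_path = export_to_video(video_frames)
```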
Now the video can be upscaled:
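The upscale pass might then look like this (a sketch assuming the cerspense/zeroscope_v2_XL checkpoint and the video_frames produced above):

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

# Resize the low-resolution frames to the XL checkpoint's 1024x576 size
video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]
video_frames = pipe("Darth Vader surfing a wave", video=video, strength=0.6).frames
video_path = export_to_video(video_frames)
```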
Here are some sample outputs:
Darth Vader surfing in waves.
TextToVideoSDPipeline
( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet3DConditionModel, scheduler: KarrasDiffusionSchedulers )
Parameters
vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — Frozen text-encoder.
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet3DConditionModel) — A UNet3DConditionModel to denoise the encoded video latents.
scheduler (KarrasDiffusionSchedulers) — A scheduler to be used in combination with unet to denoise the encoded image latents.
Pipeline for text-to-video generation.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__
( prompt: typing.Union[str, typing.List[str]] = None, height: typing.Optional[int] = None, width: typing.Optional[int] = None, num_frames: int = 16, num_inference_steps: int = 50, guidance_scale: float = 9.0, negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, eta: float = 0.0, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, latents: typing.Optional[torch.FloatTensor] = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, output_type: typing.Optional[str] = 'np', return_dict: bool = True, callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None, callback_steps: int = 1, cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None ) → TextToVideoSDPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) — The prompt or prompts to guide video generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated video.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated video.
num_frames (int, optional, defaults to 16) — The number of video frames to generate. The default of 16 frames at 8 frames per second amounts to 2 seconds of video.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher-quality video at the expense of slower inference.
guidance_scale (float, optional, defaults to 9.0) — A higher guidance scale value encourages the model to generate video closely linked to the text prompt at the expense of lower video quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in video generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. Latents should be of shape (batch_size, num_channel, num_frames, height, width).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "np") — The output format of the generated video. Choose between torch.FloatTensor or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a TextToVideoSDPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
Returns
TextToVideoSDPipelineOutput or tuple — If return_dict is True, a TextToVideoSDPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated frames.
The call function to the pipeline for generation.
Examples:
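A sketch of a minimal call (the checkpoint and prompt are illustrative):

```python
import torch
from diffusers import TextToVideoSDPipeline
from diffusers.utils import export_to_video

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt).frames
video_path = export_to_video(video_frames)
```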
disable_vae_slicing
( )
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.
disable_vae_tiling
( )
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.
enable_vae_slicing
( )
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
enable_vae_tiling
( )
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
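As a quick illustration of these toggles (a sketch assuming pipe is a loaded TextToVideoSDPipeline; the prompt is illustrative):

```python
# Turn the memory-saving VAE modes on for a large generation...
pipe.enable_vae_slicing()   # decode latents one slice at a time
pipe.enable_vae_tiling()    # decode/encode in tiles for large frames
video_frames = pipe("a panda dancing in the snow").frames

# ...and revert to single-step decoding afterwards
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```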
encode_prompt
( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, lora_scale: typing.Optional[float] = None )
Parameters
prompt (str or List[str], optional) — Prompt to be encoded.
device (torch.device) — Torch device.
num_images_per_prompt (int) — Number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
Encodes the prompt into text encoder hidden states.
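A sketch of how these arguments fit together, assuming encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple as in recent Diffusers releases (the prompts are illustrative):

```python
import torch

# Pre-compute the embeddings once, then reuse them across calls
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "an astronaut riding a horse",
    device=torch.device("cuda"),
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)
video_frames = pipe(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds
).frames
```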
VideoToVideoSDPipeline
( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet3DConditionModel, scheduler: KarrasDiffusionSchedulers )
Parameters
vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode videos to and from latent representations.
text_encoder (CLIPTextModel) — Frozen text-encoder.
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet3DConditionModel) — A UNet3DConditionModel to denoise the encoded video latents.
scheduler (KarrasDiffusionSchedulers) — A scheduler to be used in combination with unet to denoise the encoded image latents.
Pipeline for text-guided video-to-video generation.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__
( prompt: typing.Union[str, typing.List[str]] = None, video: typing.Union[typing.List[numpy.ndarray], torch.FloatTensor] = None, strength: float = 0.6, num_inference_steps: int = 50, guidance_scale: float = 15.0, negative_prompt: typing.Union[str, typing.List[str], NoneType] = None, eta: float = 0.0, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, latents: typing.Optional[torch.FloatTensor] = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, output_type: typing.Optional[str] = 'np', return_dict: bool = True, callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None, callback_steps: int = 1, cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None ) → TextToVideoSDPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) — The prompt or prompts to guide video generation. If not defined, you need to pass prompt_embeds.
video (List[np.ndarray] or torch.FloatTensor) — Video frames or a tensor representing a video batch to be used as the starting point for the process. Can also accept video latents as video; if latents are passed directly, they are not encoded again.
strength (float, optional, defaults to 0.6) — Indicates how much to transform the reference video. Must be between 0 and 1. video is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added; when strength is 1, the added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores video.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher-quality video at the expense of slower inference.
guidance_scale (float, optional, defaults to 15.0) — A higher guidance scale value encourages the model to generate video closely linked to the text prompt at the expense of lower video quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in video generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. Latents should be of shape (batch_size, num_channel, num_frames, height, width).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "np") — The output format of the generated video. Choose between torch.FloatTensor or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a TextToVideoSDPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
Returns
TextToVideoSDPipelineOutput or tuple — If return_dict is True, a TextToVideoSDPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated frames.
The call function to the pipeline for generation.
Examples:
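A sketch of the two-pass workflow from the usage section above (checkpoints, prompt, and frame count are illustrative):

```python
import torch
from PIL import Image
from diffusers import TextToVideoSDPipeline, VideoToVideoSDPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

# First pass: low-resolution text-to-video
low_res = TextToVideoSDPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
low_res.scheduler = DPMSolverMultistepScheduler.from_config(low_res.scheduler.config)
low_res.enable_model_cpu_offload()
video_frames = low_res("Darth Vader surfing a wave", num_frames=24).frames

# Second pass: upscale the frames with the video-to-video pipeline
pipe = VideoToVideoSDPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]
video_frames = pipe("Darth Vader surfing a wave", video=video, strength=0.6).frames
video_path = export_to_video(video_frames)
```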
disable_vae_slicing
( )
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.
disable_vae_tiling
( )
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.
enable_vae_slicing
( )
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
enable_vae_tiling
( )
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
encode_prompt
( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, lora_scale: typing.Optional[float] = None )
Parameters
prompt (str or List[str], optional) — Prompt to be encoded.
device (torch.device) — Torch device.
num_images_per_prompt (int) — Number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
Encodes the prompt into text encoder hidden states.
TextToVideoSDPipelineOutput
( frames: typing.Union[typing.List[numpy.ndarray], torch.FloatTensor] )
Parameters
frames (List[np.ndarray] or torch.FloatTensor) — List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as a torch tensor. The length of the list denotes the video length (the number of frames).
Output class for text-to-video pipelines.