PNDM

Pseudo Numerical Methods for Diffusion Models on Manifolds (PNDM) is by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao.

The abstract from the paper is:

Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules.

The original codebase can be found at luping-liu/PNDM.

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
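For example, the PNDMScheduler can be dropped into an existing pipeline in place of its default scheduler. The snippet below is a minimal sketch; the Stable Diffusion checkpoint name is only an illustrative assumption, and any compatible checkpoint works the same way.

from diffusers import StableDiffusionPipeline, PNDMScheduler

# The checkpoint name is an example; substitute any Stable Diffusion checkpoint you have access to.
pipeline = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

# Reuse the existing scheduler configuration so the noise schedule matches the checkpoint.
pipeline.scheduler = PNDMScheduler.from_config(pipeline.scheduler.config)

# PNDM typically needs far fewer denoising steps than DDPM.
image = pipeline("a photograph of an astronaut riding a horse", num_inference_steps=50).images[0]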

PNDMPipeline

class diffusers.PNDMPipeline

( unet: UNet2DModel, scheduler: PNDMScheduler )

Parameters

  • unet (UNet2DModel) — A UNet2DModel to denoise the encoded image latents.

  • scheduler (PNDMScheduler) — A PNDMScheduler used in combination with unet to denoise the encoded image.

Pipeline for unconditional image generation.

This model inherits from DiffusionPipelinearrow-up-right. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
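As a sketch of how the two components fit together, the pipeline can also be assembled by hand from a UNet2DModel and a PNDMScheduler. The checkpoint name below is only an example and is assumed to store its UNet weights in a "unet" subfolder, as DDPM pipeline repositories typically do.

from diffusers import UNet2DModel, PNDMScheduler, PNDMPipeline

# Load a pretrained unconditional UNet (example checkpoint, assumed pipeline-style repo layout).
unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", subfolder="unet")

# Pair it with a PNDM scheduler using default settings.
scheduler = PNDMScheduler()

pipeline = PNDMPipeline(unet=unet, scheduler=scheduler)
image = pipeline(num_inference_steps=50).images[0]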

__call__

( batch_size: int = 1, num_inference_steps: int = 50, generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None, output_type: typing.Optional[str] = 'pil', return_dict: bool = True, **kwargs ) → ImagePipelineOutput or tuple

Parameters

  • batch_size (int, optional, defaults to 1) — The number of images to generate.

  • num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

  • generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.

  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.

  • return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

ImagePipelineOutput or tuple

If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images.

The call function to the pipeline for generation.

Example:

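A minimal usage sketch, assuming the google/ddpm-cifar10-32 checkpoint is available (any compatible unconditional checkpoint works the same way):

from diffusers import PNDMPipeline

# Load the model and scheduler from a pretrained checkpoint (example name).
pipeline = PNDMPipeline.from_pretrained("google/ddpm-cifar10-32")

# Sample random noise and denoise it in 50 PNDM steps.
image = pipeline(num_inference_steps=50).images[0]

# Save the generated image.
image.save("pndm_generated_image.png")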

ImagePipelineOutput

class diffusers.ImagePipelineOutput

( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] )

Parameters

  • images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).

Output class for image pipelines.
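A short sketch of consuming this output; the checkpoint name and batch size are assumptions for illustration.

from diffusers import PNDMPipeline

pipeline = PNDMPipeline.from_pretrained("google/ddpm-cifar10-32")  # example checkpoint

# return_dict=True (the default) yields an ImagePipelineOutput with an .images field.
output = pipeline(batch_size=2, num_inference_steps=50)
first_image = output.images[0]  # a PIL.Image.Image when output_type="pil"

# return_dict=False yields a plain tuple whose first element is the list of images.
(images,) = pipeline(batch_size=2, num_inference_steps=50, return_dict=False)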