Unconditional image generation
Unlike text-to-image or image-to-image models, unconditional image generation is not conditioned on any text or images. An unconditional model only generates images that resemble its training data distribution.
This guide will show you how to train an unconditional image generation model on existing datasets as well as on your own custom dataset. All the training scripts for unconditional image generation can be found in the examples/unconditional_image_generation folder of the Diffusers repository if you're interested in learning more about the training details.
Before running the script, make sure you install the library's training dependencies:
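A typical setup, sketched here on the assumption that you work from a source clone of the Diffusers repository and its unconditional image generation example folder (which contains the train_unconditional.py script), looks like this:

```bash
# Install Diffusers from source
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .

# Install the dependencies of the unconditional image generation example
cd examples/unconditional_image_generation
pip install -r requirements.txt
```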
Next, initialize an 🤗 Accelerate environment with:
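🤗 Accelerate helps manage training across multiple GPUs or machines; its interactive setup is started with:

```bash
accelerate config
```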
To set up a default 🤗 Accelerate environment without choosing any configurations:
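The Accelerate CLI provides a shortcut that writes a default configuration without asking any questions:

```bash
accelerate config default
```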
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
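For example, from a notebook cell you can write a basic configuration file directly from Python:

```py
from accelerate.utils import write_basic_config

# Write a default 🤗 Accelerate config file without an interactive prompt
write_basic_config()
```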
You can upload your model to the Hub by adding the following argument to the training script:
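In the example script this is the `--push_to_hub` flag, shown here in isolation; in practice you would combine it with the full set of training arguments further below:

```bash
accelerate launch train_unconditional.py --push_to_hub
```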
It is a good idea to regularly save checkpoints in case anything happens during training. To save a checkpoint, pass the following argument to the training script:
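This is controlled by the `--checkpointing_steps` argument; for example, to save the training state every 500 steps:

```bash
accelerate launch train_unconditional.py --checkpointing_steps=500
```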
The full training state is saved in a subfolder of the `output_dir` every 500 steps, which allows you to load a checkpoint and resume training if you pass the `--resume_from_checkpoint` argument to the training script:
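For example, to resume from a checkpoint folder named checkpoint-1500 (an illustrative name; use whatever subfolder exists in your `output_dir`):

```bash
accelerate launch train_unconditional.py --resume_from_checkpoint="checkpoint-1500"
```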
The training script creates and saves a `diffusion_pytorch_model.bin` file in your repository.
💡 A full training run takes 2 hours on 4xV100 GPUs.
You're ready to launch the training script now! Specify the dataset name to finetune on with the `--dataset_name` argument and then save the model to the path in `--output_dir`. To use your own dataset, take a look at the Create a dataset for training guide.
For example, to finetune on the Oxford Flowers dataset:
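A command along the following lines should work; the hyperparameters are illustrative rather than tuned values, and `huggan/flowers-102-categories` is assumed to be the Hub id of the Oxford Flowers dataset:

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --resolution=64 \
  --output_dir="ddpm-ema-flowers-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision="no" \
  --push_to_hub
```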
Or if you want to train your model on the Pokémon dataset:
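Only the dataset id and output path change; `huggan/pokemon` is assumed here to be an available image dataset on the Hub:

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/pokemon" \
  --resolution=64 \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision="no" \
  --push_to_hub
```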
`accelerate` allows for seamless multi-GPU training. Follow the 🤗 Accelerate instructions for running distributed training with `accelerate`. Here is an example command:
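A sketch of a multi-GPU launch, assuming 4 GPUs on a single machine and mixed-precision training; the script arguments mirror the single-GPU example above:

```bash
accelerate launch --multi_gpu --num_processes=4 train_unconditional.py \
  --dataset_name="huggan/pokemon" \
  --resolution=64 \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --use_ema \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision="fp16" \
  --push_to_hub
```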