Custom Diffusion training example
Custom Diffusion is a method to customize text-to-image models like Stable Diffusion given just a few (4-5) images of a subject. The `train_custom_diffusion.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.
This training example was contributed by Nupur Kumari (one of the authors of Custom Diffusion).
Running locally with PyTorch
Installing the dependencies
Before running the scripts, make sure to install the library’s training dependencies:
Important
To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
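A typical source install for 🤗 Diffusers in a fresh virtual environment looks like this (the repository URL is the standard one):

```shell
# Install 🤗 Diffusers from source so the example scripts match the library
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```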
Then cd into the example folder
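Assuming the example lives in the usual `examples/custom_diffusion` folder of the repository:

```shell
cd examples/custom_diffusion
```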
Now run
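This step installs the example-specific requirements; a sketch, assuming the example folder ships a `requirements.txt` and that `clip-retrieval` (used below for collecting regularization images) is installed from PyPI:

```shell
# Install the example's requirements plus clip-retrieval for the retrieval step
pip install -r requirements.txt
pip install clip-retrieval
```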
And initialize an 🤗 Accelerate environment with:
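The interactive configuration uses the standard 🤗 Accelerate CLI:

```shell
# Answer the prompts to describe your hardware setup
accelerate config
```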
Or for a default accelerate configuration without answering questions about your environment
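The non-interactive default configuration:

```shell
# Write a default config without answering any questions
accelerate config default
```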
Or if your environment doesn’t support an interactive shell e.g. a notebook
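In a notebook, the same default configuration can be written programmatically with 🤗 Accelerate's helper:

```python
from accelerate.utils import write_basic_config

# Write a default 🤗 Accelerate config file without any interactive prompts
write_basic_config()
```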
Cat example 😺
Now let’s get our dataset. Download the dataset from here and unzip it. To use your own dataset, take a look at the Create a dataset for training guide.
We also collect 200 real images using clip-retrieval, which are combined with the target images in the training dataset as a regularization. This prevents overfitting to the given target image. The following flags enable the regularization: `with_prior_preservation` and `real_prior`, with `prior_loss_weight=1.0`. The `class_prompt` should be the category name, the same as the target image. The collected real images come with text captions similar to the `class_prompt`. The retrieved images are saved in `class_data_dir`. You can disable `real_prior` to use generated images as regularization instead. To collect the real images, use this command first before training.
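A sketch of the retrieval step, assuming a `retrieve.py` helper in the example folder; the prompt and output directory are illustrative:

```shell
# Retrieve 200 real "cat" images (with captions) into class_data_dir
python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200
```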
Note: Change the `resolution` to 768 if you are using the stable-diffusion-2 768x768 model.
The script creates and saves model checkpoints and a `pytorch_custom_diffusion_weights.bin` file in your repository.
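A sketch of the launch command, using the flags described above; the base model, paths, modifier token (`<new1>`), and hyperparameters are illustrative assumptions:

```shell
export MODEL_NAME="CompVis/stable-diffusion-v1-4"   # assumed base model
export OUTPUT_DIR="path-to-save-model"
export INSTANCE_DIR="./data/cat"                    # the unzipped cat images

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="cat" --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=250 \
  --scale_lr --hflip \
  --modifier_token "<new1>"
```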
Use `--enable_xformers_memory_efficient_attention` for faster training with lower VRAM requirements (16GB per GPU). Follow this guide for installation instructions.
To track your experiments using Weights and Biases (`wandb`) and to save intermediate results (which we HIGHLY recommend), follow these steps:

1. Install `wandb`: `pip install wandb`.
2. Authorize: `wandb login`.
3. Specify a `validation_prompt` and set `report_to` to `wandb` while launching training. You can also configure the following related arguments:
   - `num_validation_images`
   - `validation_steps`
Here is an example command:
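An illustrative launch command with the wandb-related arguments set; the base model, paths, modifier token, and prompts are assumptions:

```shell
export MODEL_NAME="CompVis/stable-diffusion-v1-4"   # assumed base model
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=./data/cat \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="cat" --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --max_train_steps=250 \
  --scale_lr --hflip \
  --modifier_token "<new1>" \
  --validation_prompt="<new1> cat sitting in a bucket" \
  --report_to="wandb"
```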
Here is an example Weights and Biases page where you can check out the intermediate results along with other training details.
If you specify `--push_to_hub`, the learned parameters will be pushed to a repository on the Hugging Face Hub. Here is an example repository.
Training on multiple concepts 🐱🪵
Provide a json file with the info about each concept, similar to this.
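A hypothetical concepts file (e.g. saved as `concept_list.json`) for the cat and wooden pot concepts; the field names mirror the single-concept flags above, and the paths and modifier tokens are illustrative:

```json
[
  {
    "instance_prompt": "photo of a <new1> cat",
    "class_prompt": "cat",
    "instance_data_dir": "./data/cat/",
    "class_data_dir": "./real_reg/samples_cat/"
  },
  {
    "instance_prompt": "photo of a <new2> wooden pot",
    "class_prompt": "wooden pot",
    "instance_data_dir": "./data/wooden_pot/",
    "class_data_dir": "./real_reg/samples_wooden_pot/"
  }
]
```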
To collect the real images run this command for each concept in the json file.
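A sketch of the per-concept retrieval, again assuming the `retrieve.py` helper; prompt and directory are illustrative:

```shell
# Repeat for each concept, e.g. the wooden pot
python retrieve.py --class_prompt "wooden pot" --class_data_dir real_reg/samples_wooden_pot --num_class_images 200
```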
And then we’re ready to start training!
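An illustrative multi-concept launch, assuming the script accepts a `--concepts_list` argument pointing at the json file and a `+`-joined list of modifier tokens; model, paths, and hyperparameters are assumptions:

```shell
export MODEL_NAME="CompVis/stable-diffusion-v1-4"   # assumed base model
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --output_dir=$OUTPUT_DIR \
  --concepts_list=./concept_list.json \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --num_class_images=200 \
  --max_train_steps=500 \
  --scale_lr --hflip \
  --modifier_token "<new1>+<new2>"
```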
Here is an example Weights and Biases page where you can check out the intermediate results along with other training details.
Training on human faces
For fine-tuning on human faces, we found the following configuration to work better: `learning_rate=5e-6`, `max_train_steps=1000 to 2000`, and `freeze_model=crossattn`, with at least 15-20 images.
To collect the real images use this command first before training.
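The retrieval step for the face category, assuming the same `retrieve.py` helper; prompt and directory are illustrative:

```shell
# Retrieve 200 real "person" images as regularization data
python retrieve.py --class_prompt person --class_data_dir real_reg/samples_person --num_class_images 200
```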
Then start training!
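An illustrative launch command using the face-specific configuration from above (`learning_rate=5e-6`, `freeze_model=crossattn`); the base model, paths, modifier token, and remaining flags are assumptions:

```shell
export MODEL_NAME="CompVis/stable-diffusion-v1-4"   # assumed base model
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=./data/person \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_person/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="person" --num_class_images=200 \
  --instance_prompt="photo of a <new1> person" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=5e-6 \
  --max_train_steps=1000 \
  --scale_lr --hflip \
  --freeze_model crossattn \
  --modifier_token "<new1>"
```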
Inference
Once you have trained a model using the above command, you can run inference using the command below. Make sure to include the modifier token used during training (e.g. `<new1>` in the example above) in your prompt.
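A minimal inference sketch, assuming Stable Diffusion v1-4 as the base model and that training saved `pytorch_custom_diffusion_weights.bin` plus a `<new1>.bin` token embedding in `path-to-save-model` (all names illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# Load the learned cross-attention weights and the new token embedding
pipe.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")

# The modifier token <new1> must appear in the prompt
image = pipe(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
```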
It’s possible to directly load these parameters from a Hub repository:
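Loading from the Hub works the same way, just with a repository id in place of the local path; the repo id below is hypothetical:

```python
import torch
from diffusers import DiffusionPipeline

# "your-username/custom-diffusion-cat" is a hypothetical Hub repo id
model_id = "your-username/custom-diffusion-cat"
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion(model_id, weight_name="<new1>.bin")
```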
Here is an example of performing inference with multiple concepts:
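A multi-concept inference sketch, assuming both token embeddings (`<new1>.bin`, `<new2>.bin`) were saved alongside the attention weights; model, paths, and prompt are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
# Load one embedding per learned concept
pipe.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")
pipe.load_textual_inversion("path-to-save-model", weight_name="<new2>.bin")

image = pipe(
    "the <new1> cat sculpture in the style of a <new2> wooden pot",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("multi_subject.png")
```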
Here, `cat` and `wooden pot` refer to the multiple concepts.
Inference from a training checkpoint
You can also perform inference from one of the complete checkpoints saved during the training process if you used the `--checkpointing_steps` argument.
TODO.
Set grads to none
To save even more memory, pass the `--set_grads_to_none` argument to the script. This will set grads to None instead of zero. However, be aware that it changes certain behaviors, so if you start experiencing any problems, remove this argument.
More info: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html
Experimental results
You can refer to our webpage that discusses our experiments in detail.