Custom Diffusion
Custom Diffusion is a method to customize text-to-image models like Stable Diffusion given just a few (4~5) images of a subject. The train_custom_diffusion.py script shows how to implement the training procedure and adapt it for Stable Diffusion.
This training example was contributed by one of the authors of Custom Diffusion.
Before running the scripts, make sure to install the library's training dependencies:
Important
To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
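For example, installing straight from the GitHub repository:

```bash
# Clone 🤗 Diffusers and install it in editable mode so the install tracks the example scripts
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```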
Then cd into the example folder:
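Assuming the standard repository layout, the Custom Diffusion example lives under examples/custom_diffusion:

```bash
cd examples/custom_diffusion
```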
Now run
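For this example that means the example-specific requirements plus clip-retrieval, which is used later to collect the regularization images:

```bash
pip install -r requirements.txt
pip install clip-retrieval
```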
And initialize an 🤗 Accelerate environment with:
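```bash
# Answer the interactive prompts to describe your training setup
accelerate config
```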
Or for a default accelerate configuration without answering questions about your environment
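```bash
# Write a default configuration without any interactive prompts
accelerate config default
```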
Or if your environment doesn't support an interactive shell (e.g., a notebook):
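```python
# Create a default 🤗 Accelerate config file programmatically, e.g. from a notebook cell
from accelerate.utils import write_basic_config

write_basic_config()
```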
Now let's get our dataset. Download the example dataset and unzip it. To use your own dataset, take a look at the dataset creation guide.

We also collect 200 real images using clip-retrieval, which are combined with the target images in the training dataset as a regularization. This prevents overfitting to the given target images. The following flags enable the regularization: with_prior_preservation and real_prior with prior_loss_weight=1. The class_prompt should be the category name, the same as the target images. The collected real images have text captions similar to the class_prompt. The retrieved images are saved in class_data_dir. You can disable real_prior to use generated images as regularization instead. To collect the real images, use this command first before training.
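A sketch of the retrieval step, assuming the retrieve.py helper that ships with the example; the class prompt and output directory below are illustrative:

```bash
# Retrieve ~200 real "cat" images (with captions) into the class data directory
python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200
```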
The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file in your repository.

Note: Change the resolution to 768 if you are using the 768x768 model.
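A sketch of the launch command with the flags described above. The base model, paths, and hyperparameter values are illustrative; check python train_custom_diffusion.py --help for the authoritative argument list:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="./data/cat"       # directory with the 4~5 target images
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="cat" --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=250 \
  --scale_lr --hflip \
  --modifier_token "<new1>" \
  --push_to_hub
```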
Use --enable_xformers_memory_efficient_attention for faster training with lower VRAM requirements (16GB per GPU). Follow the xFormers installation instructions first.

To track your experiments using Weights and Biases (wandb) and to save intermediate results (which we HIGHLY recommend), follow these steps:
Install wandb: pip install wandb.
Authorize: wandb login.
Then specify a validation_prompt and set report_to to wandb while launching training. You can also configure the following related arguments:
num_validation_images
validation_steps
Here is an example command:
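A sketch that adds the validation and reporting flags to the command above; the validation prompt and step counts are illustrative:

```bash
accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="cat" --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --max_train_steps=250 \
  --scale_lr --hflip \
  --modifier_token "<new1>" \
  --validation_prompt="<new1> cat sitting in a bucket" \
  --num_validation_images=4 \
  --validation_steps=50 \
  --report_to="wandb" \
  --push_to_hub
```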
If you specify --push_to_hub, the learned parameters will be pushed to a repository on the Hugging Face Hub.

To train on multiple concepts, provide a JSON file with the info about each concept.

To collect the real images, run this command for each concept in the JSON file.
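Roughly, with the braces standing in for each concept's class prompt and class data directory from the JSON file:

```bash
# Repeat once per concept listed in the JSON file
python retrieve.py --class_prompt {class_prompt} --class_data_dir {class_data_dir} --num_class_images 200
```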
And then we're ready to start training!
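A sketch of a multi-concept launch, assuming the JSON file is passed via --concepts_list and that multiple modifier tokens are joined with "+"; both of these and the hyperparameter values are illustrative:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --output_dir=$OUTPUT_DIR \
  --concepts_list=./concept_list.json \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --num_class_images=200 \
  --scale_lr --hflip \
  --modifier_token "<new1>+<new2>" \
  --push_to_hub
```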
For fine-tuning on human faces we found the following configuration to work better: learning_rate=5e-6, max_train_steps=1000 to 2000, and freeze_model=crossattn with at least 15-20 images.
To collect the real images use this command first before training.
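For faces the retrieval step might look like this (class prompt and directory are illustrative):

```bash
python retrieve.py --class_prompt person --class_data_dir real_reg/samples_person --num_class_images 200
```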
Then start training!
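A sketch with the face-specific settings above applied; paths and the exact step count are illustrative:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-face-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_person/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="person" --num_class_images=200 \
  --instance_prompt="photo of a <new1> person" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=5e-6 \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --scale_lr --hflip \
  --freeze_model=crossattn \
  --modifier_token "<new1>" \
  --enable_xformers_memory_efficient_attention \
  --push_to_hub
```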
Once you have trained a model using the above command, you can run inference using the command below. Make sure to include the modifier token (e.g. <new1> in the above example) in your prompt.
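A minimal inference sketch, assuming the weights were saved to the output directory used above, the modifier token was <new1>, and the token embedding was saved as <new1>.bin:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion cross-attention weights and the learned token embedding
pipe.unet.load_attn_procs(
    "path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")

image = pipe(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
```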
It's possible to directly load these parameters from a Hub repository:
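A sketch that loads the same files from a Hub repository instead; the repository id below is a placeholder, and the snippet assumes the repo's model card records the base model it was trained from:

```python
import torch
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline

model_id = "your-username/custom-diffusion-cat"  # placeholder Hub repository
card = RepoCard.load(model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion(model_id, weight_name="<new1>.bin")

image = pipe(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
```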
Here is an example of performing inference with multiple concepts:
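A multi-concept sketch, assuming two modifier tokens <new1> and <new2> were trained and pushed to a (placeholder) Hub repository:

```python
import torch
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline

model_id = "your-username/custom-diffusion-cat-wooden-pot"  # placeholder Hub repository
card = RepoCard.load(model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")

# One learned embedding per concept
pipe.load_textual_inversion(model_id, weight_name="<new1>.bin")
pipe.load_textual_inversion(model_id, weight_name="<new2>.bin")

image = pipe(
    "the <new1> cat sculpture in the style of a <new2> wooden pot",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("multi-subject.png")
```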
Here, cat and wooden pot refer to the multiple concepts.
You can also perform inference from one of the complete checkpoints saved during the training process, if you used the --checkpointing_steps argument.
TODO.
To save even more memory, pass the --set_grads_to_none argument to the script. This will set grads to None instead of zero. However, be aware that it changes certain behaviors, so if you start experiencing any problems, remove this argument.
More info on this behavior is available in the PyTorch documentation for Optimizer.zero_grad.

You can refer to the Custom Diffusion webpage, which discusses our experiments in detail.