DreamBooth fine-tuning with LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, for DreamBooth fine-tuning of the CompVis/stable-diffusion-v1-4 model.
Although LoRA was initially designed as a technique for reducing the number of trainable parameters in large language models, it can also be applied to diffusion models. Fully fine-tuning a diffusion model is time-consuming, which is why lightweight techniques like DreamBooth or Textual Inversion gained popularity. With the introduction of LoRA, customizing and fine-tuning a model on a specific dataset has become even faster.
In this guide we’ll be using a DreamBooth fine-tuning script that is available in PEFT’s GitHub repo. Feel free to explore it and learn how things work.
Set up your environment
Start by cloning the PEFT repository:
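For example:

```shell
git clone https://github.com/huggingface/peft
```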
Navigate to the directory containing the training scripts for fine-tuning DreamBooth with LoRA:
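At the time of writing, the example lives under examples/lora_dreambooth (the path may change as the repo evolves):

```shell
cd peft/examples/lora_dreambooth
```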
Set up your environment: install PEFT and all the required libraries. At the time of writing this guide, we recommend installing PEFT from source.
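One way to do this (the exact requirements file may differ across PEFT versions):

```shell
pip install -r requirements.txt
pip install git+https://github.com/huggingface/peft
```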
Fine-tuning DreamBooth
Prepare the images that you will use for fine-tuning the model. Set up a few environment variables:
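For example (the three directory values are placeholders; point them at your own folders, and MODEL_NAME is set here too so the launch command can reference it):

```shell
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"
```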
Here:
- INSTANCE_DIR: The directory containing the images that you intend to use for training your model.
- CLASS_DIR: The directory containing class-specific images. In this example, we use prior preservation to avoid overfitting and language drift. For prior preservation, you need other images of the same class as part of the training process. However, these images can be generated, and the training script will save them to a local path you specify here.
- OUTPUT_DIR: The destination folder for storing the trained model's weights.
To learn more about DreamBooth fine-tuning with prior-preserving loss, check out the Diffusers documentation.
Launch the training script with accelerate and pass hyperparameters, as well as LoRA-specific arguments to it such as:
- use_lora: Enables LoRA in the training script.
- lora_r: The dimension used by the LoRA update matrices.
- lora_alpha: Scaling factor.
- lora_text_encoder_r: LoRA rank for the text encoder.
- lora_text_encoder_alpha: LoRA alpha (scaling factor) for the text encoder.
Here’s what the full set of script arguments may look like:
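An illustrative invocation; the hyperparameter values below are examples rather than tuned recommendations, and the script's exact flags may differ across PEFT versions:

```shell
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --use_lora \
  --lora_r 16 \
  --lora_alpha 27 \
  --lora_text_encoder_r 16 \
  --lora_text_encoder_alpha 17 \
  --learning_rate=1e-4 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --max_train_steps=800
```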
Inference with a single adapter
To run inference with the fine-tuned model, first specify the base model with which the fine-tuned LoRA weights will be combined:
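For example:

```python
# Base model whose weights the LoRA adapter will be merged into
MODEL_NAME = "CompVis/stable-diffusion-v1-4"
```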
Next, add a function that will create a Stable Diffusion pipeline for image generation. It will combine the weights of the base model with the fine-tuned LoRA weights using LoraConfig.
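A sketch of such a function, assuming the checkpoint directory contains unet and text_encoder subfolders as saved by the training script (the helper name get_lora_sd_pipeline is a convention, not required):

```python
import os

import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, PeftModel


def get_lora_sd_pipeline(
    ckpt_dir, base_model_name_or_path=None, dtype=torch.float16, device="cuda", adapter_name="default"
):
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")

    # If the text encoder was fine-tuned too, its LoraConfig records the base model
    if os.path.exists(text_encoder_sub_dir) and base_model_name_or_path is None:
        config = LoraConfig.from_pretrained(text_encoder_sub_dir)
        base_model_name_or_path = config.base_model_name_or_path

    if base_model_name_or_path is None:
        raise ValueError("Please specify the base model name or path")

    # Load the base pipeline, then wrap the UNet (and text encoder, if present) with the LoRA weights
    pipe = StableDiffusionPipeline.from_pretrained(base_model_name_or_path, torch_dtype=dtype).to(device)
    pipe.unet = PeftModel.from_pretrained(pipe.unet, unet_sub_dir, adapter_name=adapter_name)

    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder = PeftModel.from_pretrained(
            pipe.text_encoder, text_encoder_sub_dir, adapter_name=adapter_name
        )

    if dtype in (torch.float16, torch.bfloat16):
        pipe.unet.half()
        pipe.text_encoder.half()

    pipe.to(device)
    return pipe
```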
Now you can use the function above to create a Stable Diffusion pipeline using the LoRA weights that you have created during the fine-tuning step.
Note, if you're running inference on the same machine, the path you specify here will be the same as OUTPUT_DIR.
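For example, assuming the pipeline-creation function from the previous step is called get_lora_sd_pipeline and the path placeholder points at your saved weights:

```python
pipe = get_lora_sd_pipeline("path-to-saved-model", adapter_name="dog")
```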
Once you have the pipeline with your fine-tuned model, you can use it to generate images:
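For example, assuming "sks" was the rare instance token used in your training prompts:

```python
prompt = "sks dog playing fetch in the park"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
image.save("dog_fetch.png")  # save path is illustrative
```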
Multi-adapter inference
With PEFT you can combine multiple adapters for inference. In the previous example you fine-tuned Stable Diffusion on some dog images. The pipeline created from these weights was given the name adapter_name="dog". Now, suppose you also fine-tuned this base model on images of a crochet toy. Let's see how we can use both adapters.
First, you'll need to perform all the steps as in the single adapter inference example:
1. Specify the base model.
2. Add a function that creates a Stable Diffusion pipeline for image generation using LoRA weights.
3. Create a pipe with adapter_name="dog" based on the model fine-tuned on dog images.
Next, you're going to need a few more helper functions. To load another adapter, create a load_adapter() function that leverages the load_adapter() method of PeftModel (e.g. pipe.unet.load_adapter(peft_model_path, adapter_name)):
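One possible sketch, passing the pipeline in explicitly and loading the text-encoder adapter only when one was saved alongside the UNet:

```python
import os


def load_adapter(pipe, ckpt_dir, adapter_name):
    # Load an additional LoRA adapter into the UNet (and text encoder, if present)
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
    pipe.unet.load_adapter(unet_sub_dir, adapter_name=adapter_name)
    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder.load_adapter(text_encoder_sub_dir, adapter_name=adapter_name)
```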
To switch between adapters, write a function that uses the set_adapter() method of PeftModel (see pipe.unet.set_adapter(adapter_name)):
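A minimal sketch that switches the active adapter on both LoRA-wrapped submodules:

```python
def set_adapter(pipe, adapter_name):
    # Make adapter_name the active adapter on both the UNet and the text encoder
    pipe.unet.set_adapter(adapter_name)
    pipe.text_encoder.set_adapter(adapter_name)
```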
Finally, add a function to create a weighted LoRA adapter.
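A sketch using PeftModel's add_weighted_adapter() method to register a new adapter whose weights blend existing ones:

```python
def create_weighted_lora_adapter(pipe, adapters, weights, adapter_name="default"):
    # Register a new adapter on the UNet combining the given adapters with the given weights
    pipe.unet.add_weighted_adapter(adapters, weights, adapter_name)
    return pipe
```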
Let’s load the second adapter from the model fine-tuned on images of a crochet toy, and give it a unique name:
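For example, assuming the load_adapter() helper defined earlier and a placeholder checkpoint path:

```python
load_adapter(pipe, "path-to-model-fine-tuned-on-crochet-toy", adapter_name="crochet")
```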
Create a pipeline using weighted adapters:
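For example, blending the two adapters roughly equally (the weights and the new adapter's name are illustrative):

```python
pipe = create_weighted_lora_adapter(pipe, ["crochet", "dog"], [1.0, 1.05], adapter_name="crochet_dog")
```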
Now you can switch between adapters. If you'd like to generate more dog images, set the adapter to "dog":
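Assuming the set_adapter() helper defined earlier, and "sks" as the instance token from training:

```python
set_adapter(pipe, adapter_name="dog")
prompt = "sks dog playing fetch in the park"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
```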
In the same way, you can switch to the second adapter:
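Again assuming set_adapter(), and whatever instance token you used when training the crochet adapter (the prompt here is illustrative):

```python
set_adapter(pipe, adapter_name="crochet")
prompt = "a photo of a sks crochet toy"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
```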
Finally, you can use combined weighted adapters:
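For example, activating the blended adapter created with the weighted-adapter function (adapter name and prompt are illustrative):

```python
set_adapter(pipe, adapter_name="crochet_dog")
prompt = "sks dog in a crochet style"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]
```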