Image classification using LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to fine-tune an image classification model. By using LoRA from 🤗 PEFT, we can reduce the number of trainable parameters in the model to only 0.77% of the original.
LoRA achieves this reduction by adding low-rank “update matrices” to specific blocks of the model, such as the attention blocks. During fine-tuning, only these matrices are trained, while the original model parameters are left unchanged. At inference time, the update matrices are merged with the original model parameters to produce the final classification result.
For more information on LoRA, please refer to the original LoRA paper.
Install the libraries required for model training:
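The install step can be sketched as follows; the package list is an assumption based on the libraries used later in this guide:

```shell
pip install transformers accelerate evaluate datasets peft
```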
Check the versions of all required libraries to make sure you are up to date:
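A standard-library sketch for printing the installed versions (the package list mirrors the install step above):

```python
import importlib.metadata

def pkg_version(name):
    """Return the installed version of a package, or None if it is missing."""
    try:
        return importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        return None

for pkg in ("transformers", "accelerate", "evaluate", "datasets", "peft"):
    print(f"{pkg}: {pkg_version(pkg) or 'not installed'}")
```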
To prepare the dataset for training and evaluation, create `label2id` and `id2label` dictionaries. These will come in handy when performing inference and for metadata information:
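As a minimal sketch, assuming the class names have been read into a `labels` list from the dataset’s features (the label names here are hypothetical):

```python
# Hypothetical label names; in practice these come from the dataset's features.
labels = ["apple_pie", "baklava", "sushi"]

# Map label names to integer ids and back.
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for i, label in enumerate(labels)}

print(label2id)
print(id2label)
```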
Next, load the image processor of the model you’re fine-tuning:
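A minimal sketch using `AutoImageProcessor`; the checkpoint name is an assumption, so substitute the model you chose:

```python
from transformers import AutoImageProcessor

# Assumed checkpoint; replace with the model you are fine-tuning.
checkpoint = "google/vit-base-patch16-224-in21k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)

print(image_processor.size)        # target size for resizing
print(image_processor.image_mean)  # normalization mean
print(image_processor.image_std)   # normalization std
```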
The `image_processor` contains useful information on the size that the training and evaluation images should be resized to, as well as the values that should be used to normalize the pixel values. Using the `image_processor`, prepare transformation functions for the datasets. These functions will include data augmentation and pixel scaling:
Split the dataset into training and validation sets:
Finally, set the transformation functions for the datasets accordingly:
Before loading the model, let’s define a helper function to check the total number of parameters a model has, as well as how many of them are trainable.
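A sketch of such a helper for any PyTorch model (it also returns the counts, for convenience):

```python
def print_trainable_parameters(model):
    """Count trainable vs. total parameters of a PyTorch model."""
    trainable_params = 0
    all_params = 0
    for _, param in model.named_parameters():
        all_params += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_params} "
        f"|| trainable%: {100 * trainable_params / all_params:.2f}"
    )
    return trainable_params, all_params
```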
Before creating a `PeftModel`, you can check the number of trainable parameters in the original model:
Next, use `get_peft_model` to wrap the base model so that the “update” matrices are added in the respective places.
Let’s unpack what’s going on here. To use LoRA, you need to specify the target modules in `LoraConfig` so that `get_peft_model()` knows which modules inside our model need to be amended with LoRA matrices. In this example, we’re only interested in targeting the query and value matrices of the attention blocks of the base model. Since the parameters corresponding to these matrices are named `query` and `value` respectively, we specify them accordingly in the `target_modules` argument of `LoraConfig`.
We also specify `modules_to_save`. After wrapping the base model with `get_peft_model()` along with the `config`, we get a new model where only the LoRA parameters (the so-called “update matrices”) are trainable, while the pre-trained parameters are kept frozen. However, we also want the classifier parameters to be trained when fine-tuning the base model on our custom dataset. To ensure the classifier parameters are trained too, we specify `modules_to_save`. This also ensures that these modules are serialized alongside the LoRA trainable parameters when using utilities like `save_pretrained()` and `push_to_hub()`.
Here’s what the other parameters mean:

- `r`: The dimension used by the LoRA update matrices.
- `alpha`: Scaling factor.
- `bias`: Specifies if the `bias` parameters should be trained. `"none"` denotes that none of the `bias` parameters will be trained.

`r` and `alpha` together control the total number of final trainable parameters when using LoRA, giving you the flexibility to balance a trade-off between end performance and compute efficiency.
By looking at the number of trainable parameters, you can see how many parameters we’re actually training. Since the goal is to achieve parameter-efficient fine-tuning, you should expect to see fewer trainable parameters in the `lora_model` in comparison to the original model, which is indeed the case here.
Compared to non-PEFT methods, you can use a larger batch size since there are fewer parameters to train. You can also set a larger learning rate than the usual one (larger than the typical 1e-5, for example). This can potentially also reduce the need to conduct expensive hyperparameter tuning experiments.
The `compute_metrics` function takes a named tuple as input: `predictions`, which are the logits of the model as NumPy arrays, and `label_ids`, which are the ground-truth labels as NumPy arrays.
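A sketch of `compute_metrics` that computes accuracy directly with NumPy (so the example is self-contained; the guide may instead use a metrics library such as 🤗 Evaluate):

```python
import numpy as np

def compute_metrics(eval_pred):
    """Accuracy from logits (eval_pred.predictions) vs. eval_pred.label_ids."""
    predicted_classes = np.argmax(eval_pred.predictions, axis=1)
    accuracy = float((predicted_classes == eval_pred.label_ids).mean())
    return {"accuracy": accuracy}
```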
Bring everything together: model, training arguments, data, collation function, etc. Then, start the training!
In just a few minutes, the fine-tuned model shows 96% validation accuracy even on this small subset of the training dataset.
Once the fine-tuning is done, share the LoRA parameters with the community like so:
Next, let’s see how to load the LoRA-updated parameters along with our base model for inference. When you wrap a base model with `PeftModel`, the modifications are done in place. To mitigate any concerns that might stem from in-place modifications, initialize the base model just as you did earlier and construct the inference model.
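A sketch of constructing the inference model; the checkpoint and Hub repository names are assumptions, so substitute your own:

```python
from peft import PeftModel
from transformers import AutoModelForImageClassification

# Assumed names; substitute your own checkpoint and Hub repository.
checkpoint = "google/vit-base-patch16-224-in21k"
repo_id = "your-username/vit-base-lora-finetuned"  # hypothetical

# Re-initialize the base model (pass the same label2id/id2label as during
# training so the classification head matches), then attach the trained
# LoRA parameters on top of it.
base_model = AutoModelForImageClassification.from_pretrained(checkpoint)
inference_model = PeftModel.from_pretrained(base_model, repo_id)
```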
Let’s now fetch an example image for inference.
First, instantiate an `image_processor` from the underlying model repo.
Then, prepare the example for inference.
Finally, run inference!
To share the fine-tuned model with the community at the end of training, authenticate using your 🤗 token. You can obtain your token from your account settings.
Choose a model checkpoint from any of the model architectures supported for image classification. When in doubt, refer to the image classification task guide in the 🤗 Transformers documentation.
To keep this example’s runtime short, let’s load only the first 5000 instances from the training set of the dataset:
It’s important to initialize the original model correctly, as it will be used as a base to create the `PeftModel` you’ll actually fine-tune. Specify the `label2id` and `id2label` so that `AutoModelForImageClassification` can append a classification head to the underlying model, adapted for this dataset. You should see the following output:
For model fine-tuning, use `Trainer`. It accepts several arguments, which you can wrap using `TrainingArguments`.
A collation function is used by `Trainer` to gather a batch of training and evaluation examples and prepare them in a format that is acceptable to the underlying model.
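Such a collation function can be sketched as follows, assuming each example carries a `pixel_values` tensor and an integer `label`, as produced by the dataset transforms:

```python
import torch

def collate_fn(examples):
    """Stack per-example tensors into a batch the model can consume."""
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    labels = torch.tensor([example["label"] for example in examples])
    return {"pixel_values": pixel_values, "labels": labels}
```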
When calling `push_to_hub` on the `lora_model`, only the LoRA parameters along with any modules specified in `modules_to_save` are saved. Take a look at the resulting repository on the Hub. You’ll see that it’s only 2.6 MB! This greatly helps with portability, especially when using a very large model for fine-tuning.