Working with custom models
Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like LoRA - are not restricted to specific model types.
In this guide, we will see how LoRA can be applied to a multilayer perceptron and a computer vision model from the timm library.
Let's assume that we want to fine-tune a multilayer perceptron with LoRA. Here is one possible definition:
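A minimal sketch of such a model (the input size, output size, and hidden size are illustrative); wrapping the layers in an nn.Sequential is what gives the linear layers the names 'seq.0', 'seq.2', and 'seq.4' that we will target below:

```python
from torch import nn


class MLP(nn.Module):
    def __init__(self, num_units_hidden=2000):
        super().__init__()
        # input layer (seq.0), hidden layer (seq.2), and output layer (seq.4),
        # with nonlinearities in between
        self.seq = nn.Sequential(
            nn.Linear(20, num_units_hidden),
            nn.ReLU(),
            nn.Linear(num_units_hidden, num_units_hidden),
            nn.ReLU(),
            nn.Linear(num_units_hidden, 2),
            nn.LogSoftmax(dim=-1),
        )

    def forward(self, X):
        return self.seq(X)
```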
This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an output layer.
For this toy example, we choose an exceedingly large number of hidden units to highlight the efficiency gains from PEFT, but those gains are in line with more realistic examples.
There are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers models, PEFT will know which layers to apply LoRA to, but in this case, it is up to us as users to choose the layers. To determine the names of the layers to tune:
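One way to do this, assuming the MLP class sketched above, is to print the name and type of every module:

```python
print([(n, type(m)) for n, m in MLP().named_modules()])
```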
This should print:
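Roughly the following, given the MLP sketch above (reformatted for readability):

```
[('', <class '__main__.MLP'>),
 ('seq', <class 'torch.nn.modules.container.Sequential'>),
 ('seq.0', <class 'torch.nn.modules.linear.Linear'>),
 ('seq.1', <class 'torch.nn.modules.activation.ReLU'>),
 ('seq.2', <class 'torch.nn.modules.linear.Linear'>),
 ('seq.3', <class 'torch.nn.modules.activation.ReLU'>),
 ('seq.4', <class 'torch.nn.modules.linear.Linear'>),
 ('seq.5', <class 'torch.nn.modules.activation.LogSoftmax'>)]
```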
Let's say we want to apply LoRA to the input layer and to the hidden layer; those are 'seq.0' and 'seq.2'. Moreover, let's assume we want to update the output layer without LoRA; that would be 'seq.4'. The corresponding config would be:
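A sketch of that config: target_modules selects the layers that receive LoRA adapters, while modules_to_save marks layers that are trained fully and saved alongside the adapter:

```python
from peft import LoraConfig

config = LoraConfig(
    target_modules=["seq.0", "seq.2"],  # apply LoRA to the input and hidden layers
    modules_to_save=["seq.4"],          # train the output layer fully, without LoRA
)
```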
With that, we can create our PEFT model and check the fraction of parameters trained:
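For example, assuming the MLP and config from above:

```python
from peft import get_peft_model

model = MLP()
peft_model = get_peft_model(model, config)
# prints the number of trainable parameters versus the total number of parameters
peft_model.print_trainable_parameters()
```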
Finally, we can use any training framework we like, or write our own fit loop, to train the peft_model.

For a complete example, check out the accompanying example notebook.
The timm library contains a large number of pretrained computer vision models. Those can also be fine-tuned with PEFT. Let's check out how this works in practice.

To start, ensure that timm is installed in the Python environment:
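For example, using pip:

```bash
python -m pip install -U timm
```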
Next we load a timm model for an image classification task:
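A sketch of how this could look; the specific checkpoint and the number of classes are assumptions (the layer names discussed below correspond to a PoolFormer-style model, whose blocks use 2D convs):

```python
import timm

# hypothetical choice of checkpoint and number of classes
model = timm.create_model("poolformer_m36", pretrained=True, num_classes=3)
```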
Again, we need to make a decision about what layers to apply LoRA to. Since LoRA supports 2D conv layers, and since those are a major building block of this model, we should apply LoRA to the 2D conv layers. To identify the names of those layers, let's look at all the layer names:
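As before, we can print the name and type of every module:

```python
print([(n, type(m)) for n, m in model.named_modules()])
```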
This will print a very long list, so we won't reproduce it here.

Upon closer inspection, we see that the 2D conv layers have names such as "stages.0.blocks.0.mlp.fc1" and "stages.0.blocks.0.mlp.fc2". How can we match those layer names specifically? You can write a regular expression to match the layer names. For our case, the regex r".*\.mlp\.fc\d" should do the job.
Furthermore, as in the first example, we should ensure that the output layer, in this case the classification head, is also updated. Looking at the end of the list printed above, we can see that it's named 'head.fc'. With that in mind, here is our LoRA config:
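A sketch of that config; when target_modules is given as a string, PEFT treats it as a regex pattern to match layer names against:

```python
from peft import LoraConfig

config = LoraConfig(
    target_modules=r".*\.mlp\.fc\d",  # regex matching the 2D conv layers in the MLP blocks
    modules_to_save=["head.fc"],      # train the classification head fully, without LoRA
)
```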
Then we only need to create the PEFT model by passing our base model and the config to get_peft_model:
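For example:

```python
from peft import get_peft_model

peft_model = get_peft_model(model, config)
# prints the trainable parameter count versus the total parameter count;
# for this setup the trainable fraction comes out below 2%
peft_model.print_trainable_parameters()
```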
This shows us that we only need to train less than 2% of all parameters, which is a huge efficiency gain.
For a complete example, check out the accompanying image classification example notebook.