Load and train adapters with BOINC AI PEFT
Parameter-Efficient Fine-Tuning (PEFT) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of them. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient, with lower compute usage, while producing results comparable to a fully fine-tuned model.
Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them.
The adapter weights for an OPTForCausalLM model stored on the Hub are only ~6MB, compared to the full size of the model weights, which can be ~700MB.
If you’re interested in learning more about the 🌎 PEFT library, check out the documentation.
Get started by installing 🌎 PEFT:
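```bash
pip install peft
```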
If you want to try out the brand new features, you might be interested in installing the library from source:
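```bash
pip install git+https://github.com/huggingface/peft.git
```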
🌎 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with a few lines of code. The following methods are supported:

- Low Rank Adapters (LoRA)
- IA3
- AdaLoRA

If you want to use other PEFT methods, such as prompt learning or prompt tuning, or to learn more about the 🌎 PEFT library in general, please refer to the documentation.
To load and use a PEFT adapter model from 🌎 Transformers, make sure the Hub repository or local directory contains an adapter_config.json file and the adapter weights. Then you can load the PEFT adapter model using the AutoModelFor class. For example, to load a PEFT adapter model for causal language modeling:
1. specify the PEFT model id
2. pass it to the AutoModelForCausalLM class
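A minimal sketch, using ybelkada/opt-350m-lora (a LoRA adapter for facebook/opt-350m on the Hub) as an illustrative checkpoint:

```py
from transformers import AutoModelForCausalLM

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
```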
You can load a PEFT adapter with either an AutoModelFor class or the base model class like OPTForCausalLM or LlamaForCausalLM.
You can also load a PEFT adapter by calling the load_adapter method:
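A sketch with the same illustrative checkpoints, loading the base model first and then attaching the adapter:

```py
from transformers import AutoModelForCausalLM

model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

# load the base model, then attach the adapter weights on top of it
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```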
The bitsandbytes integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (see the bitsandbytes integration guide to learn more). Add the load_in_8bit or load_in_4bit parameters to from_pretrained() and set device_map="auto" to effectively distribute the model to your hardware:
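For example, a sketch loading the illustrative adapter checkpoint above in 8bit precision (assumes bitsandbytes is installed):

```py
from transformers import AutoModelForCausalLM

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```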
You can use ~peft.PeftModel.add_adapter to add a new adapter to a model with an existing adapter, as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:
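A sketch of such a setup (the target modules are illustrative, and init_lora_weights=False initializes the adapter with random weights so the example is self-contained):

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    init_lora_weights=False,
)

# attach the first adapter under an explicit name
model.add_adapter(lora_config, adapter_name="adapter_1")
```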
To add a new adapter:
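Continuing the sketch above:

```py
# attach a second LoRA adapter of the same type
model.add_adapter(lora_config, adapter_name="adapter_2")
```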
Now you can use ~peft.PeftModel.set_adapter to set which adapter to use:
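Continuing the sketch above (the tokenizer and prompt are illustrative):

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# generate with adapter_1
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# switch to adapter_2 and generate again
model.set_adapter("adapter_2")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```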
Once you’ve added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:
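Continuing the sketch above (inputs is assumed from the earlier setup):

```py
model.enable_adapters()
output = model.generate(**inputs)
```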
To disable the adapter module:
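```py
model.disable_adapters()
output = model.generate(**inputs)
```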
PEFT adapters are supported by the Trainer class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:
If you aren’t familiar with fine-tuning a model with Trainer, take a look at the Fine-tune a pretrained model tutorial.
1. Define your adapter configuration with the task type and hyperparameters (see ~peft.LoraConfig for more details about what the hyperparameters do).
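A sketch of a LoRA configuration for causal language modeling (the hyperparameter values are illustrative):

```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)
```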
2. Add the adapter to the model.
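```py
model.add_adapter(peft_config)
```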
3. Now you can pass the model to Trainer!
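A sketch, assuming training_args and train_dataset have been defined as in the fine-tuning tutorial:

```py
from transformers import Trainer

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```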
To save your trained adapter and load it back:
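A sketch, with save_dir as a placeholder output path:

```py
save_dir = "./opt-350m-lora"
model.save_pretrained(save_dir)

# reload the base model together with the trained adapter weights
model = AutoModelForCausalLM.from_pretrained(save_dir)
```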