BOINC AI PEFT
PEFT, or Parameter-Efficient Fine-Tuning, is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters. PEFT methods only fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs, because fine-tuning large-scale PLMs in full is prohibitively costly. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
PEFT is seamlessly integrated with Accelerate for large-scale models, leveraging DeepSpeed and Big Model Inference.
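As a minimal sketch of how an adapter is attached (the base checkpoint and hyperparameters below are placeholders, not recommendations), a typical LoRA setup looks like this:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a frozen base model and attach a small LoRA adapter to it.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # the downstream task
    r=8,                              # rank of the LoRA update matrices (illustrative)
    lora_alpha=32,                    # scaling factor (illustrative)
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)

# Only the adapter parameters are trainable; the base model stays frozen.
model.print_trainable_parameters()
```

Calling `print_trainable_parameters()` confirms that only a small fraction of the weights will be updated during fine-tuning.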
Supported methods:

LoRA: Low-Rank Adaptation of Large Language Models
Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation; P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
P-Tuning: GPT Understands, Too
Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
IA3: Infused Adapter by Inhibiting and Amplifying Inner Activations
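Each of these methods is driven by its own configuration class (LoraConfig, PrefixTuningConfig, PromptEncoderConfig for P-Tuning, PromptTuningConfig, AdaLoraConfig, IA3Config), all used through the same `get_peft_model` entry point. As a hedged example, with an illustrative base checkpoint and virtual-token count, prompt tuning for causal language modeling can be set up like this:

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

# Prompt tuning trains only a small set of virtual tokens prepended to the input.
model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # illustrative value
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```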
The tables provided below list the PEFT methods and models supported for each task. To apply a particular PEFT method for a task, please refer to the corresponding Task guides.
Causal Language Modeling

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bloom | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-J | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-NeoX-20B | ✅ | ✅ | ✅ | ✅ | ✅ |
| LLaMA | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
Conditional Generation

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
Sequence Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-2 | ✅ | ✅ | ✅ | ✅ |  |
| Bloom | ✅ | ✅ | ✅ | ✅ |  |
| OPT | ✅ | ✅ | ✅ | ✅ |  |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ |  |
| GPT-J | ✅ | ✅ | ✅ | ✅ |  |
| Deberta | ✅ |  | ✅ | ✅ |  |
| Deberta-v2 | ✅ |  | ✅ | ✅ |  |
Token Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| BERT | ✅ | ✅ |  |  |  |
| RoBERTa | ✅ | ✅ |  |  |  |
| GPT-2 | ✅ | ✅ |  |  |  |
| Bloom | ✅ | ✅ |  |  |  |
| OPT | ✅ | ✅ |  |  |  |
| GPT-Neo | ✅ | ✅ |  |  |  |
| GPT-J | ✅ | ✅ |  |  |  |
| Deberta | ✅ |  |  |  |  |
| Deberta-v2 | ✅ |  |  |  |  |
Text-to-Image Generation

| Model | LoRA |
| --- | --- |
| Stable Diffusion | ✅ |
Image Classification

| Model | LoRA |
| --- | --- |
| ViT | ✅ |
| Swin | ✅ |
Image-to-Text (multi-modal models)

| Model | LoRA |
| --- | --- |
| Blip-2 | ✅ |
Semantic Segmentation

| Model | LoRA |
| --- | --- |
| SegFormer | ✅ |
We have tested LoRA for ViT and Swin for fine-tuning on image classification. However, it should be possible to use LoRA for any compatible image classification model from Transformers. Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
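As a rough sketch (the checkpoint, label count, and hyperparameters below are illustrative), a LoRA configuration for a ViT classifier typically targets the attention projections and keeps the newly initialized classification head fully trainable:

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint and label count; adjust for your dataset.
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=10,
)
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in ViT
    modules_to_save=["classifier"],     # keep the new classification head fully trainable
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```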
As with image-to-text models, you should be able to apply LoRA to any of the semantic segmentation models in Transformers. It's worth noting that we haven't tested this with every architecture yet. Therefore, if you come across any issues, kindly create an issue report.
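For instance, a similarly hedged sketch for SegFormer (again with an illustrative checkpoint and hyperparameters) targets the attention projections and keeps the decode head fully trainable:

```python
from transformers import AutoModelForSemanticSegmentation
from peft import LoraConfig, get_peft_model

# Illustrative checkpoint; pass your own id2label/label2id for a real dataset.
model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/mit-b0")
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in SegFormer
    modules_to_save=["decode_head"],    # segmentation head stays fully trainable
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```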