PEFT

🌍 PEFT, or Parameter-Efficient Fine-Tuning, is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model’s parameters. Because full fine-tuning of large-scale PLMs is prohibitively costly, PEFT methods fine-tune only a small number of (extra) model parameters, significantly decreasing computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
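As a minimal sketch of what this looks like in practice, the snippet below wraps GPT-2 with a LoRA adapter so that only the adapter weights are trainable; the checkpoint name and hyperparameters are illustrative choices, not recommendations:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained base model; GPT-2 is used here purely as a small example.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Describe the PEFT method: LoRA with illustrative (not recommended) settings.
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # tells PEFT how the model will be used
    r=8,                           # rank of the low-rank update matrices
    lora_alpha=32,                 # scaling applied to the LoRA updates
    lora_dropout=0.1,
)

# Wrap the base model; only the LoRA parameters require gradients now.
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # reports a trainable fraction well under 1%
```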

PEFT is seamlessly integrated with 🌍 Accelerate for large-scale models, leveraging DeepSpeed and Big Model Inference.
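Concretely, Big Model Inference comes down to passing `device_map="auto"` when loading the base model, after which a trained PEFT adapter can be attached on top. A hedged sketch, where the checkpoint name is illustrative and the adapter path is a placeholder:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Accelerate's Big Model Inference shards the weights across available
# devices (GPUs, CPU, disk) via device_map="auto".
base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",  # illustrative large checkpoint
    device_map="auto",
)

# Attach adapter weights previously trained and saved with PEFT
# ("path/to/adapter" is a placeholder, not a real location).
model = PeftModel.from_pretrained(base_model, "path/to/adapter")
```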

Supported methods

The PEFT methods covered by the tables below are LoRA, Prefix Tuning, P-Tuning, Prompt Tuning, and IA3.

Supported models

The tables below list the models supported for each task and the PEFT methods that can be applied to them. To use a particular PEFT method for a task, please refer to the corresponding task guide.

Causal Language Modeling

| Model        | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|--------------|------|---------------|----------|---------------|-----|
| GPT-2        | ✅   | ✅            | ✅       | ✅            | ✅  |
| Bloom        | ✅   | ✅            | ✅       | ✅            | ✅  |
| OPT          | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-Neo      | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-J        | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-NeoX-20B | ✅   | ✅            | ✅       | ✅            | ✅  |
| LLaMA        | ✅   | ✅            | ✅       | ✅            | ✅  |
| ChatGLM      | ✅   | ✅            | ✅       | ✅            | ✅  |
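As one concrete combination from this table, here is a minimal sketch of Prompt Tuning on Bloom, mirroring the pattern from the task guides; the checkpoint, virtual-token count, and initialization text are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,                      # length of the learned soft prompt
    prompt_tuning_init=PromptTuningInit.TEXT,  # initialize the prompt from text
    prompt_tuning_init_text="Classify if the tweet is a complaint or not:",  # illustrative
    tokenizer_name_or_path="bigscience/bloomz-560m",
)

model = get_peft_model(model, peft_config)
```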

Conditional Generation

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|-------|------|---------------|----------|---------------|-----|
| T5    | ✅   | ✅            | ✅       | ✅            | ✅  |
| BART  | ✅   | ✅            | ✅       | ✅            | ✅  |
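For the seq2seq task type, a hedged sketch of Prefix Tuning on T5 might look like this; the checkpoint and virtual-token count are illustrative:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

peft_config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,  # trainable prefix length prepended at each layer
)

model = get_peft_model(model, peft_config)
```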

Sequence Classification

| Model      | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|------------|------|---------------|----------|---------------|-----|
| BERT       | ✅   | ✅            | ✅       | ✅            | ✅  |
| RoBERTa    | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-2      | ✅   | ✅            | ✅       | ✅            |     |
| Bloom      | ✅   | ✅            | ✅       | ✅            |     |
| OPT        | ✅   | ✅            | ✅       | ✅            |     |
| GPT-Neo    | ✅   | ✅            | ✅       | ✅            |     |
| GPT-J      | ✅   | ✅            | ✅       | ✅            |     |
| Deberta    | ✅   |               | ✅       | ✅            |     |
| Deberta-v2 | ✅   |               | ✅       | ✅            |     |
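A minimal sketch of one entry from this table, LoRA on BERT for sequence classification; the checkpoint, label count, and hyperparameters are illustrative:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# num_labels=2 assumes a binary classification task; purely illustrative.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # keeps the classification head trainable
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)

model = get_peft_model(model, peft_config)
```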

Token Classification

| Model      | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|------------|------|---------------|----------|---------------|-----|
| BERT       | ✅   | ✅            |          |               |     |
| RoBERTa    | ✅   | ✅            |          |               |     |
| GPT-2      | ✅   | ✅            |          |               |     |
| Bloom      | ✅   | ✅            |          |               |     |
| OPT        | ✅   | ✅            |          |               |     |
| GPT-Neo    | ✅   | ✅            |          |               |     |
| GPT-J      | ✅   | ✅            |          |               |     |
| Deberta    | ✅   |               |          |               |     |
| Deberta-v2 | ✅   |               |          |               |     |
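For token classification the pattern is the same, only the task type changes. A hedged sketch of Prefix Tuning on BERT, where the checkpoint and label count are illustrative:

```python
from transformers import AutoModelForTokenClassification
from peft import PrefixTuningConfig, TaskType, get_peft_model

# num_labels=9 assumes a CoNLL-style NER label set; purely illustrative.
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9
)

peft_config = PrefixTuningConfig(
    task_type=TaskType.TOKEN_CLS,
    num_virtual_tokens=20,  # illustrative prefix length
)

model = get_peft_model(model, peft_config)
```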

Text-to-Image Generation

| Model            | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|------------------|------|---------------|----------|---------------|-----|
| Stable Diffusion | ✅   |               |          |               |     |
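PEFT's LoRA implementation only needs a module and a list of target submodules, so a hedged sketch of adding LoRA to a Stable Diffusion UNet might look as follows. The checkpoint name and the attention projection names ("to_q", "to_k", "to_v", "to_out.0") are assumptions about the diffusers UNet layout, not something this table specifies:

```python
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Target the UNet's attention projections; these module names are an
# assumption about diffusers' UNet internals.
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)

pipe.unet = get_peft_model(pipe.unet, unet_lora_config)
```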

Image Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|-------|------|---------------|----------|---------------|-----|
| ViT   | ✅   |               |          |               |     |
| Swin  | ✅   |               |          |               |     |

We have tested LoRA for ViT and Swin for fine-tuning on image classification. However, it should be possible to use LoRA for any ViT-based model from 🌍 Transformers. Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
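Following that note, a minimal sketch of applying LoRA to a ViT classifier, mirroring the pattern from the Image classification task guide; the checkpoint and label count are illustrative:

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

# num_labels=10 is an illustrative label count for a downstream dataset.
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=10
)

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],  # ViT self-attention projections
    lora_dropout=0.1,
    bias="none",
    modules_to_save=["classifier"],     # train the new head alongside LoRA
)

model = get_peft_model(model, config)
```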

Image to text (Multi-modal models)

| Model  | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|--------|------|---------------|----------|---------------|-----|
| Blip-2 | ✅   |               |          |               |     |

Semantic Segmentation

As with image-to-text models, you should be able to apply LoRA to any of the segmentation models. Note that we haven’t tested this with every architecture yet, so if you come across any issues, please open an issue report. A minimal sketch follows the table below.

| Model     | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|-----------|------|---------------|----------|---------------|-----|
| SegFormer | ✅   |               |          |               |     |
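A hedged sketch of LoRA on SegFormer, following the pattern from the semantic segmentation task guide; the checkpoint and label count are illustrative assumptions:

```python
from transformers import AutoModelForSemanticSegmentation
from peft import LoraConfig, get_peft_model

# The checkpoint and label count are illustrative; a real run would pass
# the dataset's id2label mapping as in the semantic segmentation task guide.
model = AutoModelForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0", num_labels=150
)

config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["query", "value"],  # SegFormer self-attention projections
    lora_dropout=0.1,
    bias="lora_only",
    modules_to_save=["decode_head"],    # keep the segmentation head trainable
)

model = get_peft_model(model, config)
```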
