PEFT

🌍 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model’s parameters. Because fine-tuning large-scale PLMs is prohibitively costly, PEFT methods fine-tune only a small number of (extra) model parameters, significantly decreasing computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
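For a sense of what this looks like in practice, here is a minimal sketch that wraps GPT-2 (listed in the causal language modeling table below) with a LoRA adapter; the hyperparameter values are illustrative, not prescriptive.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained base model from 🌍 Transformers.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Wrap it with a LoRA adapter: only the small low-rank matrices
# injected by LoRA are trained; the base weights stay frozen.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
model = get_peft_model(model, config)

# Report how few parameters are actually trainable.
model.print_trainable_parameters()
```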

PEFT is seamlessly integrated with 🌍 Accelerate for large-scale models, leveraging DeepSpeed and Big Model Inference.
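As a rough sketch of the Big Model Inference side of that integration: loading the base model with device_map="auto" lets 🌍 Accelerate shard it across the available devices before the PEFT adapter is attached. The adapter checkpoint name below is a hypothetical placeholder, not a real repository.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Big Model Inference: 🌍 Accelerate shards the base model's weights
# across the available GPUs and CPU memory.
base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", device_map="auto"
)

# Attach a trained PEFT adapter on top of the sharded base model.
# NOTE: "your-org/opt-6.7b-lora" is a hypothetical adapter checkpoint.
model = PeftModel.from_pretrained(base_model, "your-org/opt-6.7b-lora")
```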

Supported methods

The tables in this section cover the following PEFT methods: LoRA, Prefix Tuning, P-Tuning, Prompt Tuning, and IA3.

Supported models

The tables below list the PEFT methods and the models supported for each task. To apply a particular PEFT method to a task, please refer to the corresponding Task guide.

Causal Language Modeling

| Model        | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|--------------|------|---------------|----------|---------------|-----|
| GPT-2        | ✅   | ✅            | ✅       | ✅            | ✅  |
| Bloom        | ✅   | ✅            | ✅       | ✅            | ✅  |
| OPT          | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-Neo      | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-J        | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-NeoX-20B | ✅   | ✅            | ✅       | ✅            | ✅  |
| LLaMA        | ✅   | ✅            | ✅       | ✅            | ✅  |
| ChatGLM      | ✅   | ✅            | ✅       | ✅            | ✅  |

Conditional Generation

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|-------|------|---------------|----------|---------------|-----|
| T5    | ✅   | ✅            | ✅       | ✅            | ✅  |
| BART  | ✅   | ✅            | ✅       | ✅            | ✅  |

Sequence Classification

| Model      | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|------------|------|---------------|----------|---------------|-----|
| BERT       | ✅   | ✅            | ✅       | ✅            | ✅  |
| RoBERTa    | ✅   | ✅            | ✅       | ✅            | ✅  |
| GPT-2      | ✅   | ✅            | ✅       | ✅            |     |
| Bloom      | ✅   | ✅            | ✅       | ✅            |     |
| OPT        | ✅   | ✅            | ✅       | ✅            |     |
| GPT-Neo    | ✅   | ✅            | ✅       | ✅            |     |
| GPT-J      | ✅   | ✅            | ✅       | ✅            |     |
| Deberta    | ✅   |               | ✅       | ✅            |     |
| Deberta-v2 | ✅   |               | ✅       | ✅            |     |

Token Classification

| Model      | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|------------|------|---------------|----------|---------------|-----|
| BERT       | ✅   | ✅            |          |               |     |
| RoBERTa    | ✅   | ✅            |          |               |     |
| GPT-2      | ✅   | ✅            |          |               |     |
| Bloom      | ✅   | ✅            |          |               |     |
| OPT        | ✅   | ✅            |          |               |     |
| GPT-Neo    | ✅   | ✅            |          |               |     |
| GPT-J      | ✅   | ✅            |          |               |     |
| Deberta    | ✅   |               |          |               |     |
| Deberta-v2 | ✅   |               |          |               |     |

Text-to-Image Generation

| Model            | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|------------------|------|---------------|----------|---------------|-----|
| Stable Diffusion | ✅   |               |          |               |     |

Image Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|-------|------|---------------|----------|---------------|-----|
| ViT   | ✅   |               |          |               |     |
| Swin  | ✅   |               |          |               |     |

Image to text (Multi-modal models)

We have tested LoRA with ViT and Swin for fine-tuning on image classification. However, it should be possible to use LoRA with any ViT-based model from 🌍 Transformers; see the sketch below. Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
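For example, here is a minimal sketch of applying LoRA to a ViT classifier, along the lines of the Image classification task guide. "query" and "value" are the attention projection module names in Transformers' ViT implementation, and the label count is an illustrative placeholder.

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

# Load a ViT backbone with a fresh classification head
# (num_labels=10 is an illustrative placeholder).
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=10
)

# Inject LoRA into the attention projections and keep the new
# classifier head fully trainable.
config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],
    lora_dropout=0.1,
    modules_to_save=["classifier"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```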

| Model  | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|--------|------|---------------|----------|---------------|-----|
| Blip-2 | ✅   |               |          |               |     |

Semantic Segmentation

As with image-to-text models, you should be able to apply LoRA to any segmentation model, though we haven’t tested this with every architecture yet. If you come across any issues, please open an issue report. A sketch for SegFormer follows.
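This is a minimal sketch assuming the SegFormer implementation in 🌍 Transformers, whose attention projections are also named "query" and "value"; the checkpoint and hyperparameters are illustrative.

```python
from transformers import AutoModelForSemanticSegmentation
from peft import LoraConfig, get_peft_model

# Load a pre-trained SegFormer encoder with a (randomly initialized)
# segmentation head; a real run would also pass id2label/label2id.
model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/mit-b0")

# Inject LoRA into the attention projections and keep the decode head
# fully trainable alongside the adapter.
config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["query", "value"],
    lora_dropout=0.1,
    modules_to_save=["decode_head"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```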

| Model     | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|-----------|------|---------------|----------|---------------|-----|
| SegFormer | ✅   |               |          |               |     |
