
PEFT model


Models

PeftModel is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base PeftModel contains methods for loading and saving models from the Hub, and supports the PromptEncoder for prompt learning.
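
A PeftModel is usually obtained by wrapping a base Transformers model with get_peft_model; a minimal sketch (the LoRA hyperparameters below are illustrative, not prescribed):

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, TaskType, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32, lora_dropout=0.1)
>>> peft_model = get_peft_model(base_model, lora_config)  # returns a PeftModel
>>> peft_model.print_trainable_parameters()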

PeftModel

class peft.PeftModel

( model: PreTrainedModel, peft_config: PeftConfig, adapter_name: str = 'default' )

Parameters

  • model (PreTrainedModel) — The base transformer model used for Peft.

  • peft_config (PeftConfig) — The configuration of the Peft model.

  • adapter_name (str) — The name of the adapter, defaults to "default".

Base model encompassing various Peft methods.

Attributes:

  • base_model (PreTrainedModel) — The base transformer model used for Peft.

  • peft_config (PeftConfig) — The configuration of the Peft model.

  • modules_to_save (list of str) — The list of sub-module names to save when saving the model.

  • prompt_encoder (PromptEncoder) — The prompt encoder used for Peft if using PromptLearningConfig.

  • prompt_tokens (torch.Tensor) — The virtual prompt tokens used for Peft if using PromptLearningConfig.

  • transformer_backbone_name (str) — The name of the transformer backbone in the base model if using PromptLearningConfig.

  • word_embeddings (torch.nn.Embedding) — The word embeddings of the transformer backbone in the base model if using PromptLearningConfig.

create_or_update_model_card

( output_dir: str )

Updates or creates a model card to include information about PEFT:

  1. Adds peft library tag

  2. Adds peft version

  3. Adds base model info

  4. Adds quantization information if it was used

disable_adapter

( )

Disables the adapter module.
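
disable_adapter is typically used as a context manager, so the adapter is restored when the block exits; a minimal sketch, assuming peft_model from the example above and a toy input:

>>> import torch

>>> inputs = {"input_ids": torch.tensor([[1, 2, 3]])}
>>> with peft_model.disable_adapter():
...     outputs = peft_model(**inputs)  # forward pass without the adapter applied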

forward

( *args: Any, **kwargs: Any )

Forward pass of the model.

from_pretrained

( model: PreTrainedModel, model_id: Union[str, os.PathLike], adapter_name: str = 'default', is_trainable: bool = False, config: Optional[PeftConfig] = None, **kwargs: Any )

Parameters

  • model (PreTrainedModel) — The model to be adapted. The model should be initialized with the from_pretrained method from the 🌍 Transformers library.

  • model_id (str or os.PathLike) — The name of the PEFT configuration to use. Can be either:

    • A string, the model id of a PEFT configuration hosted inside a model repo on the BOINC AI Hub.

    • A path to a directory containing a PEFT configuration file saved using the save_pretrained method (./my_peft_config_directory/).

  • adapter_name (str, optional, defaults to "default") — The name of the adapter to be loaded. This is useful for loading multiple adapters.

  • is_trainable (bool, optional, defaults to False) — Whether the adapter should be trainable or not. If False, the adapter will be frozen and used for inference.

  • config (PeftConfig, optional) — The configuration object to use instead of an automatically loaded configuration. This configuration object is mutually exclusive with model_id and kwargs. This is useful when the configuration is already loaded before calling from_pretrained.

  • kwargs (optional) — Additional keyword arguments passed along to the specific PEFT configuration class.

Instantiate a PEFT model from a pretrained model and loaded PEFT weights.

Note that the passed model may be modified inplace.
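
For example, to attach previously saved adapter weights to a freshly loaded base model (the adapter path below is a placeholder):

>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel

>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> peft_model = PeftModel.from_pretrained(base_model, "./my_adapter", is_trainable=False)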

get_base_model

( )

Returns the base model.

get_nb_trainable_parameters

( )

Returns the number of trainable parameters and number of all parameters in the model.
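
The two counts come back as a tuple, so the trainable fraction can be computed directly; a short sketch:

>>> trainable_params, all_params = peft_model.get_nb_trainable_parameters()
>>> print(f"trainable%: {100 * trainable_params / all_params:.4f}")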

get_prompt

( batch_size: int, task_ids: Optional[torch.Tensor] = None )

Returns the virtual prompts to use for Peft. Only applicable when peft_config.peft_type != PeftType.LORA.

get_prompt_embedding_to_save

( adapter_name: str )

Returns the prompt embedding to save when saving the model. Only applicable when peft_config.peft_type != PeftType.LORA.

print_trainable_parameters

( )

Prints the number of trainable parameters in the model.

save_pretrained

( save_directory: str, safe_serialization: bool = False, selected_adapters: Optional[List[str]] = None, **kwargs: Any )

Parameters

  • save_directory (str) — Directory where the adapter model and configuration files will be saved (will be created if it does not exist).

  • kwargs (additional keyword arguments, optional) — Additional keyword arguments passed along to the push_to_hub method.

This function saves the adapter model and the adapter configuration files to a directory, so that it can be reloaded using the PeftModel.from_pretrained() class method, and also used by the PeftModel.push_to_hub() method.
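
A typical round trip, saving the adapter and later reloading it onto a fresh base model (the directory name is a placeholder):

>>> peft_model.save_pretrained("./my_adapter")
>>> # later, on a new base_model instance:
>>> from peft import PeftModel
>>> reloaded = PeftModel.from_pretrained(base_model, "./my_adapter")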

set_adapter

( adapter_name: str )

Sets the active adapter.
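
When several adapters have been loaded, set_adapter switches which one is used in the forward pass; a minimal sketch using load_adapter (the adapter path and name are placeholders):

>>> peft_model.load_adapter("./my_second_adapter", adapter_name="other")
>>> peft_model.set_adapter("other")  # "other" is now the active adapter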

PeftModelForSequenceClassification

A PeftModel for sequence classification tasks.

class peft.PeftModelForSequenceClassification

( model, peft_config: PeftConfig, adapter_name = 'default' )

Parameters

  • model (PreTrainedModel) — Base transformer model.

  • peft_config (PeftConfig) — Peft config.

Peft model for sequence classification tasks.

Attributes:

  • config (PretrainedConfig) — The configuration object of the base model.

  • cls_layer_name (str) — The name of the classification layer.

Example:

>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config

>>> config = {
...     "peft_type": "PREFIX_TUNING",
...     "task_type": "SEQ_CLS",
...     "inference_mode": False,
...     "num_virtual_tokens": 20,
...     "token_dim": 768,
...     "num_transformer_submodules": 1,
...     "num_attention_heads": 12,
...     "num_layers": 12,
...     "encoder_hidden_size": 768,
...     "prefix_projection": False,
...     "postprocess_past_key_value_function": None,
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117

PeftModelForTokenClassification

A PeftModel for token classification tasks.

class peft.PeftModelForTokenClassification

( model, peft_config: PeftConfig = None, adapter_name = 'default' )

Parameters

  • model (PreTrainedModel) — Base transformer model.

  • peft_config (PeftConfig) — Peft config.

Peft model for token classification tasks.

Attributes:

  • config (PretrainedConfig) — The configuration object of the base model.

  • cls_layer_name (str) — The name of the classification layer.

Example:

>>> from transformers import AutoModelForTokenClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config

>>> config = {
...     "peft_type": "PREFIX_TUNING",
...     "task_type": "TOKEN_CLS",
...     "inference_mode": False,
...     "num_virtual_tokens": 20,
...     "token_dim": 768,
...     "num_transformer_submodules": 1,
...     "num_attention_heads": 12,
...     "num_layers": 12,
...     "encoder_hidden_size": 768,
...     "prefix_projection": False,
...     "postprocess_past_key_value_function": None,
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117

PeftModelForCausalLM

A PeftModel for causal language modeling.

class peft.PeftModelForCausalLM

( model, peft_config: PeftConfig, adapter_name = 'default' )

Parameters

  • model (PreTrainedModel) — Base transformer model.

  • peft_config (PeftConfig) — Peft config.

Peft model for causal language modeling.

Example:

>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config

>>> config = {
...     "peft_type": "PREFIX_TUNING",
...     "task_type": "CAUSAL_LM",
...     "inference_mode": False,
...     "num_virtual_tokens": 20,
...     "token_dim": 1280,
...     "num_transformer_submodules": 1,
...     "num_attention_heads": 20,
...     "num_layers": 36,
...     "encoder_hidden_size": 1280,
...     "prefix_projection": False,
...     "postprocess_past_key_value_function": None,
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544

PeftModelForSeq2SeqLM

A PeftModel for sequence-to-sequence language modeling.

class peft.PeftModelForSeq2SeqLM

( model, peft_config: PeftConfig, adapter_name = 'default' )

Parameters

  • model (PreTrainedModel) — Base transformer model.

  • peft_config (PeftConfig) — Peft config.

Peft model for sequence-to-sequence language modeling.

Example:

>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config

>>> config = {
...     "peft_type": "LORA",
...     "task_type": "SEQ_2_SEQ_LM",
...     "inference_mode": False,
...     "r": 8,
...     "target_modules": ["q", "v"],
...     "lora_alpha": 32,
...     "lora_dropout": 0.1,
...     "fan_in_fan_out": False,
...     "enable_lora": None,
...     "bias": "none",
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566

PeftModelForQuestionAnswering

A PeftModel for question answering.

class peft.PeftModelForQuestionAnswering

( model, peft_config: PeftConfig = None, adapter_name = 'default' )

Parameters

  • model (PreTrainedModel) — Base transformer model.

  • peft_config (PeftConfig) — Peft config.

Peft model for extractive question answering.

Attributes:

  • config (PretrainedConfig) — The configuration object of the base model.

  • cls_layer_name (str) — The name of the classification layer.

Example:

>>> from transformers import AutoModelForQuestionAnswering
>>> from peft import PeftModelForQuestionAnswering, get_peft_config

>>> config = {
...     "peft_type": "LORA",
...     "task_type": "QUESTION_ANS",
...     "inference_mode": False,
...     "r": 16,
...     "target_modules": ["query", "value"],
...     "lora_alpha": 32,
...     "lora_dropout": 0.05,
...     "fan_in_fan_out": False,
...     "bias": "none",
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForQuestionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580 || trainable%: 0.5473971721475013

PeftModelForFeatureExtraction

A PeftModel for extracting features/embeddings from transformer models.

class peft.PeftModelForFeatureExtraction

( model, peft_config: PeftConfig = None, adapter_name = 'default' )

Parameters

  • model (PreTrainedModel) — Base transformer model.

  • peft_config (PeftConfig) — Peft config.

Peft model for extracting features/embeddings from transformer models.

Attributes:

  • config (PretrainedConfig) — The configuration object of the base model.

Example:

>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtraction, get_peft_config

>>> config = {
...     "peft_type": "LORA",
...     "task_type": "FEATURE_EXTRACTION",
...     "inference_mode": False,
...     "r": 16,
...     "target_modules": ["query", "value"],
...     "lora_alpha": 32,
...     "lora_dropout": 0.05,
...     "fan_in_fan_out": False,
...     "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForFeatureExtraction(model, peft_config)
>>> peft_model.print_trainable_parameters()

