PEFT
  • 🌍GET STARTED
    • BOINC AI PEFT
    • Quicktour
    • Installation
  • 🌍TASK GUIDES
    • Image classification using LoRA
    • Prefix tuning for conditional generation
    • Prompt tuning for causal language modeling
    • Semantic segmentation using LoRA
    • P-tuning for sequence classification
    • Dreambooth fine-tuning with LoRA
    • LoRA for token classification
    • int8 training for automatic speech recognition
    • Semantic similarity with LoRA
  • 🌍DEVELOPER GUIDES
    • Working with custom models
    • PEFT low level API
    • Contributing to PEFT
    • Troubleshooting
  • 🌍ACCELERATE INTEGRATIONS
    • DeepSpeed
    • Fully Sharded Data Parallel
  • 🌍CONCEPTUAL GUIDES
    • LoRA
    • Prompting
    • IA3
  • 🌍REFERENCE
    • PEFT model
    • Configuration
    • Tuners

Configuration

The configuration classes store the configuration of a PeftModel, PEFT adapter models, and the configurations of PrefixTuning, PromptTuning, and PromptEncoder. They contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, the type of task to perform, and model configurations like the number of layers and number of attention heads.
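For example, instantiating one of the method-specific configuration subclasses is how the PEFT method and task type are picked in practice. A minimal sketch, assuming a standard PEFT installation (the hyperparameter values are illustrative):

```python
from peft import LoraConfig, TaskType

# Instantiating a method-specific subclass selects the PEFT method
# (here LoRA) and records the task type and hyperparameters.
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # the type of task to perform
    r=8,                              # illustrative LoRA rank
    lora_alpha=32,
    lora_dropout=0.1,
)
print(config.peft_type)  # PeftType.LORA, set automatically by the subclass
```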

PeftConfigMixin

class peft.config.PeftConfigMixin

( peft_type: typing.Optional[peft.utils.peft_types.PeftType] = None, auto_mapping: typing.Optional[dict] = None )

Parameters

  • peft_type (Union[~peft.utils.config.PeftType, str]) — The type of Peft method to use.

This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all PEFT adapter models. This class inherits from PushToHubMixin, which contains the methods to push your model to the Hub. The method save_pretrained will save the configuration of your adapter model in a directory. The method from_pretrained will load the configuration of your adapter model from a directory.

from_json_file

( path_json_file: str, **kwargs )

Parameters

  • path_json_file (str) — The path to the json file.

Loads a configuration from a JSON file.
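A minimal sketch of loading a configuration this way, assuming an adapter_config.json written earlier by save_pretrained exists at the (hypothetical) path below. Note that this returns the parsed JSON values rather than a config instance:

```python
from peft import PeftConfig

# Hypothetical path to a JSON file written earlier by save_pretrained.
config_values = PeftConfig.from_json_file("./my-adapter/adapter_config.json")
print(config_values["peft_type"])
```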

from_pretrained

( pretrained_model_name_or_path: str, subfolder: typing.Optional[str] = None, **kwargs )

Parameters

  • pretrained_model_name_or_path (str) — The directory or the Hub repository id where the configuration is saved.

  • kwargs (additional keyword arguments, optional) — Additional keyword arguments passed along to the child class initialization.

This method loads the configuration of your adapter model from a directory.
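For example (the repository id below is illustrative; any local directory or Hub repo containing an adapter_config.json works):

```python
from peft import PeftConfig

# Illustrative Hub repository id; a local directory also works.
config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
print(config.base_model_name_or_path)  # the base model the adapter was trained on
```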

save_pretrained

( save_directory: str, **kwargs )

Parameters

  • save_directory (str) — The directory where the configuration will be saved.

  • kwargs (additional keyword arguments, optional) — Additional keyword arguments passed along to the push_to_hub method.

This method saves the configuration of your adapter model in a directory.
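A short sketch, assuming the (hypothetical) target directory is writable; save_pretrained writes an adapter_config.json that from_pretrained can read back:

```python
from peft import LoraConfig

config = LoraConfig(task_type="SEQ_2_SEQ_LM")

# Writes adapter_config.json into the (hypothetical) directory;
# the same path can later be passed to from_pretrained.
config.save_pretrained("./my-adapter")
```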

PeftConfig

class peft.PeftConfig

( peft_type: typing.Union[str, peft.utils.peft_types.PeftType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: str = None, revision: str = None, task_type: typing.Union[str, peft.utils.peft_types.TaskType] = None, inference_mode: bool = False )

Parameters

  • peft_type (Union[~peft.utils.config.PeftType, str]) — The type of Peft method to use.

  • task_type (Union[~peft.utils.config.TaskType, str]) — The type of task to perform.

  • inference_mode (bool, defaults to False) — Whether to use the Peft model in inference mode.

This is the base configuration class to store the configuration of a PeftModel.
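In practice you usually instantiate a method-specific subclass such as LoraConfig, which fills in peft_type for you, but the shared fields can be sketched directly (the values below are illustrative):

```python
from peft import PeftConfig

# Illustrative values for the fields shared by every adapter configuration.
config = PeftConfig(
    peft_type="LORA",
    task_type="CAUSAL_LM",
    inference_mode=False,
)
print(config.peft_type, config.task_type, config.inference_mode)
```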

PromptLearningConfig

class peft.PromptLearningConfig

( peft_type: typing.Union[str, peft.utils.peft_types.PeftType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: str = None, revision: str = None, task_type: typing.Union[str, peft.utils.peft_types.TaskType] = None, inference_mode: bool = False, num_virtual_tokens: int = None, token_dim: int = None, num_transformer_submodules: typing.Optional[int] = None, num_attention_heads: typing.Optional[int] = None, num_layers: typing.Optional[int] = None )

Parameters

  • num_virtual_tokens (int) — The number of virtual tokens to use.

  • token_dim (int) — The hidden embedding dimension of the base transformer model.

  • num_transformer_submodules (int) — The number of transformer submodules in the base transformer model.

  • num_attention_heads (int) — The number of attention heads in the base transformer model.

  • num_layers (int) — The number of layers in the base transformer model.

This is the base configuration class to store the configuration of PrefixTuning, PromptEncoder, or PromptTuning.
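As a sketch, PromptTuningConfig is one such subclass; num_virtual_tokens is user-specified, while model-derived fields like token_dim and num_layers are typically filled in from the base model when the PEFT model is created:

```python
from peft import PromptTuningConfig

# num_virtual_tokens is user-specified; token_dim, num_attention_heads,
# and num_layers are usually inferred from the base model later.
config = PromptTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,
)
print(config.peft_type)  # PeftType.PROMPT_TUNING
```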

