Trainer Classes
Trainer
In TRL we support PPO (Proximal Policy Optimisation) with an implementation that largely follows the structure introduced in the paper "Fine-Tuning Language Models from Human Preferences" by D. Ziegler et al. [paper, code]. The Trainer and model classes are largely inspired by the transformers.Trainer
and transformers.AutoModel
classes and adapted for RL. We also support a RewardTrainer
that can be used to train a reward model.
PPOConfig
class trl.PPOConfig
( exp_name: str = 'doc-buil', seed: int = 0, log_with: typing.Union[typing.Literal['wandb', 'tensorboard'], NoneType] = None, task_name: typing.Optional[str] = None, model_name: typing.Optional[str] = None, query_dataset: typing.Optional[str] = None, reward_model: typing.Optional[str] = None, remove_unused_columns: bool = True, tracker_kwargs: dict = <factory>, accelerator_kwargs: dict = <factory>, project_kwargs: dict = <factory>, tracker_project_name: str = 'trl', push_to_hub_if_best_kwargs: dict = <factory>, steps: int = 20000, learning_rate: float = 1e-05, adap_kl_ctrl: bool = True, init_kl_coef: typing.Optional[float] = 0.2, kl_penalty: typing.Literal['kl', 'abs', 'mse', 'full'] = 'kl', target: typing.Optional[float] = 6, horizon: typing.Optional[float] = 10000, gamma: float = 1, lam: float = 0.95, cliprange: float = 0.2, cliprange_value: float = 0.2, vf_coef: float = 0.1, batch_size: int = 256, forward_batch_size: typing.Optional[int] = None, mini_batch_size: int = 1, gradient_accumulation_steps: int = 1, world_size: typing_extensions.Annotated[int, Suppress] = None, ppo_epochs: int = 4, max_grad_norm: typing.Optional[float] = None, optimize_cuda_cache: bool = False, early_stopping: bool = False, target_kl: float = 1, compare_steps: int = 1, ratio_threshold: float = 10.0, use_score_scaling: bool = False, use_score_norm: bool = False, score_clip: typing.Optional[float] = None, is_encoder_decoder: typing.Union[typing_extensions.Annotated[bool, Suppress], NoneType] = None, is_peft_model: typing.Union[typing_extensions.Annotated[bool, Suppress], NoneType] = None, backward_batch_size: typing_extensions.Annotated[int, Suppress] = None, global_backward_batch_size: typing_extensions.Annotated[int, Suppress] = None, global_batch_size: typing_extensions.Annotated[int, Suppress] = None )
Configuration class for PPOTrainer
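For reference, a minimal sketch of instantiating the configuration; the values shown are illustrative, not tuned recommendations, and the model name is an assumption:

```python
from trl import PPOConfig

# Illustrative values; any field from the signature above can be overridden.
ppo_config = PPOConfig(
    model_name="gpt2",   # assumed model identifier, mainly used for logging
    learning_rate=1e-5,
    batch_size=128,
    mini_batch_size=16,
    ppo_epochs=4,
    log_with=None,       # or "wandb" / "tensorboard"
)
```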
PPOTrainer
class trl.PPOTrainer
( config: PPOConfig = None, model: PreTrainedModelWrapper = None, ref_model: typing.Optional[trl.models.modeling_base.PreTrainedModelWrapper] = None, tokenizer: PreTrainedTokenizerBase = None, dataset: typing.Union[torch.utils.data.dataset.Dataset, datasets.arrow_dataset.Dataset, NoneType] = None, optimizer: typing.Optional[torch.optim.optimizer.Optimizer] = None, data_collator: typing.Optional[typing.Callable] = None, num_shared_layers: typing.Optional[int] = None, lr_scheduler: typing.Optional[torch.optim.lr_scheduler._LRScheduler] = None )
Parameters
**config** (PPOConfig): Configuration object for PPOTrainer. Check the documentation of PPOConfig for more details.
**model** (PreTrainedModelWrapper): Model to be optimized, a BOINC AI transformer model with a value head. Check the documentation of PreTrainedModelWrapper for more details.
**ref_model** (PreTrainedModelWrapper, optional): Reference model to be used for the KL penalty, a BOINC AI transformer model with a causal language modelling head. Check the documentation of PreTrainedModelWrapper for more details. If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized, with shared layers.
**tokenizer** (PreTrainedTokenizerBase): Tokenizer to be used for encoding the data. Check the documentation of transformers.PreTrainedTokenizer and transformers.PreTrainedTokenizerFast for more details.
**dataset** (Union[torch.utils.data.Dataset, datasets.Dataset], optional): PyTorch dataset or BOINC AI dataset. This is used to create a PyTorch dataloader. If no dataset is provided, the dataloader must be created outside the trainer: users need to design their own dataloader and make sure the batch size used matches the one specified in the configuration object.
**optimizer** (torch.optim.Optimizer, optional): Optimizer to be used for training. If no optimizer is provided, the trainer will create an Adam optimizer with the learning rate specified in the configuration object.
**data_collator** (DataCollatorForLanguageModeling, optional): Data collator to be used for training and passed along to the dataloader.
**num_shared_layers** (int, optional): Number of layers to be shared between the model and the reference model, if no reference model is passed. If no number is provided, all the layers will be shared.
**lr_scheduler** (torch.optim.lr_scheduler, optional): Learning rate scheduler to be used for training.
The PPOTrainer uses Proximal Policy Optimization to optimise language models. Note, this trainer is heavily inspired by the original OpenAI learning to summarize work here: https://github.com/openai/summarize-from-feedback
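As a sketch of how the pieces fit together (the model name and batch sizes here are assumptions, not recommendations):

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=8, mini_batch_size=2)

# Policy with a value head, plus a separate copy used as the KL reference.
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)

tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

ppo_trainer = PPOTrainer(config=config, model=model, ref_model=ref_model, tokenizer=tokenizer)
```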
batched_forward_pass
( model: PreTrainedModelWrapper, queries: Tensor, responses: Tensor, model_inputs: dict, return_logits: bool = False, response_masks: typing.Optional[torch.Tensor] = None ) → (tuple)
Parameters
queries (torch.LongTensor): List of tensors containing the encoded queries, shape (batch_size, query_length)
responses (torch.LongTensor): List of tensors containing the encoded responses, shape (batch_size, response_length)
return_logits (bool, optional, defaults to False): Whether to return all_logits. Set to False if logits are not needed, to reduce memory consumption.
Returns
(tuple)
all_logprobs (torch.FloatTensor): Log probabilities of the responses, shape (batch_size, response_length)
all_ref_logprobs (torch.FloatTensor): Log probabilities of the responses under the reference model, shape (batch_size, response_length)
all_values (torch.FloatTensor): Values of the responses, shape (batch_size, response_length)
Calculate model outputs in multiple batches.
compute_rewards
( scores: FloatTensor, logprobs: FloatTensor, ref_logprobs: FloatTensor, masks: LongTensor )
Parameters
scores (torch.FloatTensor): Scores from the reward model, shape (batch_size)
logprobs (torch.FloatTensor): Log probabilities of the model, shape (batch_size, response_length)
ref_logprobs (torch.FloatTensor): Log probabilities of the reference model, shape (batch_size, response_length)
Compute per token rewards from scores and KL-penalty.
create_model_card
( path: str, model_name: typing.Optional[str] = 'TRL Model' )
Parameters
path (str): The path to save the model card to.
model_name (str, optional): The name of the model, defaults to TRL Model.
Creates and saves a model card for a TRL model.
gather_stats
( stats ) → dict[str, Any]
Parameters
stats (dict[str, Any]): A dictionary of stats to be gathered. The stats should contain torch tensors.
Returns
dict[str, Any]
A dictionary of stats with the tensors gathered.
Gather stats from all processes. Useful in the context of distributed training.
generate
( query_tensor: typing.Union[torch.Tensor, typing.List[torch.Tensor]], length_sampler: typing.Callable = None, batch_size: int = 4, return_prompt: bool = True, **generation_kwargs ) → torch.LongTensor
Parameters
query_tensor (torch.LongTensor): A tensor of shape (seq_len) containing query tokens, or a list of tensors of shape (seq_len).
generation_kwargs (dict[str, Any]): Keyword arguments for generation.
length_sampler (Callable, optional): Callable that returns the number of newly generated tokens.
batch_size (int, optional): Batch size used for generation, defaults to 4.
return_prompt (bool, optional): If set to False, only the newly generated tokens are returned, not the prompt; defaults to True.
Returns
torch.LongTensor
A tensor of shape (batch_size, gen_len) containing response tokens.
Generate a response with the model given the query tensor. Calls the generate
method of the model.
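A short sketch of calling generate on a single encoded query; it assumes the ppo_trainer and tokenizer built in the earlier PPOTrainer sketch:

```python
query_tensor = tokenizer.encode("The weather today is", return_tensors="pt")[0]

generation_kwargs = {
    "do_sample": True,
    "top_k": 0,
    "top_p": 1.0,
    "max_new_tokens": 20,
    "pad_token_id": tokenizer.eos_token_id,
}

# With return_prompt=False only the newly generated tokens are returned.
response_tensor = ppo_trainer.generate(query_tensor, return_prompt=False, **generation_kwargs)
print(tokenizer.decode(response_tensor[0]))
```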
log_stats
( stats: dict, batch: dict, rewards: typing.List[torch.FloatTensor], columns_to_log: typing.List[str] = ['query', 'response'] )
Parameters
stats (dict[str, Any]): A dictionary of training stats.
batch (dict[str, Any]): A dictionary of batch data; this contains the queries and responses.
rewards (List[torch.FloatTensor]): A list of reward tensors.
A function that logs all the training stats. Call it at the end of each epoch.
loss
( old_logprobs: FloatTensor, values: FloatTensor, logits: FloatTensor, vpreds: FloatTensor, logprobs: FloatTensor, mask: LongTensor, advantages: FloatTensor, returns: FloatTensor )
Parameters
old_logprobs (torch.FloatTensor): Log probabilities of the model, shape (batch_size, response_length)
values (torch.FloatTensor): Values of the value head, shape (batch_size, response_length)
rewards (torch.FloatTensor): Rewards from the reward model, shape (batch_size, response_length)
logits (torch.FloatTensor): Logits of the model, shape (batch_size, response_length, vocab_size)
v_pred (torch.FloatTensor): Values of the value head, shape (batch_size, response_length)
logprobs (torch.FloatTensor): Log probabilities of the model, shape (batch_size, response_length)
Calculate policy and value losses.
prepare_dataloader
( dataset: typing.Union[torch.utils.data.dataset.Dataset, datasets.arrow_dataset.Dataset], data_collator = None ) → torch.utils.data.DataLoader
Parameters
dataset (Union[torch.utils.data.Dataset, datasets.Dataset]): PyTorch dataset or BOINC AI dataset. If a BOINC AI dataset is passed, the dataset will be preprocessed by removing the columns that are not used by the model.
data_collator (Optional[function]): Data collator function.
Returns
torch.utils.data.DataLoader
PyTorch dataloader
Prepare the dataloader for training.
record_step_stats
( kl_coef: float, **data ) → stats (dict)
Parameters
kl_coef (float): KL coefficient
data (dict): Dictionary of training step data
Returns
stats (dict)
Dictionary of training step statistics
Record training step statistics.
step
( queries: typing.List[torch.LongTensor], responses: typing.List[torch.LongTensor], scores: typing.List[torch.FloatTensor], response_masks: typing.Optional[typing.List[torch.LongTensor]] = None ) → dict[str, Any]
Parameters
queries (List[torch.LongTensor]): List of tensors containing the encoded queries of shape (query_length)
responses (List[torch.LongTensor]): List of tensors containing the encoded responses of shape (response_length)
scores (List[torch.FloatTensor]): List of tensors containing the scores.
response_masks (List[torch.FloatTensor], optional): List of tensors containing masks of the response tokens.
Returns
dict[str, Any]
A summary of the training statistics
Run a PPO optimisation step given a list of queries, model responses, and rewards.
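A sketch of one full PPO iteration (generation, scoring, optimisation, logging); the constant reward stands in for a real reward model, and config.batch_size is assumed to match the number of queries:

```python
import torch

prompts = ["How do plants make food?"]
queries = [tokenizer.encode(p, return_tensors="pt")[0] for p in prompts]

# Generate responses for each query (only the new tokens are kept).
responses = [
    ppo_trainer.generate(q, return_prompt=False, max_new_tokens=20,
                         pad_token_id=tokenizer.eos_token_id)[0]
    for q in queries
]

# A real setup would score query/response pairs with a reward model.
scores = [torch.tensor(1.0) for _ in queries]

stats = ppo_trainer.step(queries, responses, scores)
batch = {"query": prompts, "response": [tokenizer.decode(r) for r in responses]}
ppo_trainer.log_stats(stats, batch, scores)
```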
train_minibatch
( old_logprobs: FloatTensor, values: FloatTensor, logprobs: FloatTensor, logits: FloatTensor, vpreds: FloatTensor, mask: LongTensor, advantages: FloatTensor, returns: FloatTensor ) → train_stats (dict[str, torch.Tensor])
Parameters
logprobs (torch.FloatTensor): Log probabilities of the model, shape [batch_size, response_length]
values (torch.FloatTensor): Values of the value head, shape [batch_size, response_length]
query (torch.LongTensor): Encoded queries, shape [batch_size, query_length]
response (torch.LongTensor): Encoded responses, shape [batch_size, response_length]
model_input (torch.LongTensor): Concatenated queries and responses, shape [batch_size, query_length+response_length]
Returns
train_stats (dict[str, torch.Tensor])
Dictionary of training statistics
Train one PPO minibatch
RewardConfig
class trl.RewardConfig
( output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: typing.Optional[int] = None, per_gpu_eval_batch_size: typing.Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: typing.Optional[int] = None, eval_delay: typing.Optional[float] = 0, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, log_level: typing.Optional[str] = 'passive', log_level_replica: typing.Optional[str] = 'warning', log_on_each_node: bool = True, logging_dir: typing.Optional[str] = None, logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', logging_first_step: bool = False, logging_steps: float = 500, logging_nan_inf_filter: bool = True, save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', save_steps: float = 500, save_total_limit: typing.Optional[int] = None, save_safetensors: typing.Optional[bool] = False, save_on_each_node: bool = False, no_cuda: bool = False, use_cpu: bool = False, use_mps_device: bool = False, seed: int = 42, data_seed: typing.Optional[int] = None, jit_mode_eval: bool = False, use_ipex: bool = False, bf16: bool = False, fp16: bool = False, fp16_opt_level: str = 'O1', half_precision_backend: str = 'auto', bf16_full_eval: bool = False, fp16_full_eval: bool = False, tf32: typing.Optional[bool] = None, local_rank: int = -1, ddp_backend: typing.Optional[str] = None, tpu_num_cores: typing.Optional[int] = None, tpu_metrics_debug: bool = False, debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '', dataloader_drop_last: bool = False, eval_steps: typing.Optional[float] = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: typing.Optional[str] = None, disable_tqdm: typing.Optional[bool] = None, remove_unused_columns: typing.Optional[bool] = True, label_names: typing.Optional[typing.List[str]] = None, load_best_model_at_end: typing.Optional[bool] = False, metric_for_best_model: typing.Optional[str] = None, greater_is_better: typing.Optional[bool] = None, ignore_data_skip: bool = False, fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '', fsdp_min_num_params: int = 0, fsdp_config: typing.Optional[str] = None, fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None, deepspeed: typing.Optional[str] = None, label_smoothing_factor: float = 0.0, optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch', optim_args: typing.Optional[str] = None, adafactor: bool = False, group_by_length: bool = False, length_column_name: typing.Optional[str] = 'length', report_to: typing.Optional[typing.List[str]] = None, ddp_find_unused_parameters: typing.Optional[bool] = None, ddp_bucket_cap_mb: typing.Optional[int] = None, ddp_broadcast_buffers: typing.Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: typing.Optional[str] = None, hub_model_id: typing.Optional[str] = None, hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save', hub_token: typing.Optional[str] = None, hub_private_repo: bool = False, hub_always_push: bool = False, gradient_checkpointing: typing.Optional[bool] = True, include_inputs_for_metrics: bool = False, fp16_backend: str = 'auto', push_to_hub_model_id: typing.Optional[str] = None, push_to_hub_organization: typing.Optional[str] = None, push_to_hub_token: typing.Optional[str] = None, mp_parameters: str = '', auto_find_batch_size: bool = False, full_determinism: bool = False, torchdynamo: typing.Optional[str] = None, ray_scope: typing.Optional[str] = 'last', ddp_timeout: typing.Optional[int] = 1800, torch_compile: bool = False, torch_compile_backend: typing.Optional[str] = None, torch_compile_mode: typing.Optional[str] = None, dispatch_batches: typing.Optional[bool] = None, include_tokens_per_second: typing.Optional[bool] = False, max_length: typing.Optional[int] = None )
Parameters
max_length (int, optional, defaults to None): The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.
gradient_checkpointing (bool, optional, defaults to True): If True, use gradient checkpointing to save memory at the expense of a slower backward pass.
RewardConfig collects all training arguments related to the RewardTrainer class.
Using HfArgumentParser
we can turn this class into argparse arguments that can be specified on the command line.
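For instance, a training script could expose every field above on the command line roughly like this (the script name and flag values are hypothetical):

```python
from transformers import HfArgumentParser
from trl import RewardConfig

parser = HfArgumentParser(RewardConfig)
reward_config = parser.parse_args_into_dataclasses()[0]

# e.g. python train_reward.py --output_dir reward_model --max_length 512 --learning_rate 1e-5
print(reward_config.output_dir, reward_config.max_length)
```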
RewardTrainer
class trl.RewardTrainer
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None, args: typing.Optional[trl.trainer.training_configs.RewardConfig] = None, data_collator: typing.Optional[DataCollator] = None, train_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None, eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, typing.Dict[str, datasets.arrow_dataset.Dataset], NoneType] = None, tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: typing.Union[typing.Callable[[], transformers.modeling_utils.PreTrainedModel], NoneType] = None, compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None, callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None, max_length: typing.Optional[int] = None, peft_config: typing.Optional[typing.Dict] = None )
The RewardTrainer can be used to train your custom Reward Model. It is a subclass of the transformers.Trainer
class and inherits all of its attributes and methods. It is recommended to use an AutoModelForSequenceClassification
as the reward model. The reward model should be trained on a dataset of paired examples, where each example is a tuple of two sequences. The reward model should be trained to predict which example in the pair is more relevant to the task at hand.
The reward trainer expects a very specific format for the dataset. If you use the default RewardDataCollatorWithPadding
data collator, the dataset should contain at least these four entries, named:
input_ids_chosen
attention_mask_chosen
input_ids_rejected
attention_mask_rejected
Optionally, you can also pass a margin
entry to the dataset. This entry should contain the margin used to modulate the loss of the reward model as outlined in https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/. If you don't pass a margin, no margin will be used.
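A minimal sketch of such a pre-tokenized dataset; the token ids below are made up purely for illustration:

```python
from datasets import Dataset

train_dataset = Dataset.from_dict({
    "input_ids_chosen":        [[1, 42, 7, 2]],
    "attention_mask_chosen":   [[1, 1, 1, 1]],
    "input_ids_rejected":      [[1, 42, 9, 2]],
    "attention_mask_rejected": [[1, 1, 1, 1]],
    # "margin": [0.8],  # optional, see above
})
```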
SFTTrainer
class trl.SFTTrainer
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str] = None, args: TrainingArguments = None, data_collator: typing.Optional[DataCollator] = None, train_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None, eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, typing.Dict[str, datasets.arrow_dataset.Dataset], NoneType] = None, tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: typing.Union[typing.Callable[[], transformers.modeling_utils.PreTrainedModel], NoneType] = None, compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None, callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None, peft_config: typing.Optional[typing.Dict] = None, dataset_text_field: typing.Optional[str] = None, packing: typing.Optional[bool] = False, formatting_func: typing.Optional[typing.Callable] = None, max_seq_length: typing.Optional[int] = None, infinite: typing.Optional[bool] = False, num_of_sequences: typing.Optional[int] = 1024, chars_per_token: typing.Optional[float] = 3.6, dataset_num_proc: typing.Optional[int] = None, dataset_batch_size: int = 1000 )
Parameters
model (Union[transformers.PreTrainedModel, nn.Module, str]): The model to train; can be a PreTrainedModel, a torch.nn.Module, or a string with the model name to load from cache or download. The model can also be converted to a PeftModel if a PeftConfig object is passed to the peft_config argument.
args (Optional[transformers.TrainingArguments]): The arguments to tweak for training. Please refer to the official documentation of transformers.TrainingArguments for more information.
data_collator (Optional[transformers.DataCollator]): The data collator to use for training.
train_dataset (Optional[datasets.Dataset]): The dataset to use for training. We recommend users to use trl.trainer.ConstantLengthDataset to create their dataset.
eval_dataset (Optional[Union[datasets.Dataset, Dict[str, datasets.Dataset]]]): The dataset to use for evaluation. We recommend users to use trl.trainer.ConstantLengthDataset to create their dataset.
tokenizer (Optional[transformers.PreTrainedTokenizer]): The tokenizer to use for training. If not specified, the tokenizer associated with the model will be used.
model_init (Callable[[], transformers.PreTrainedModel]): The model initializer to use for training. If None is specified, the default model initializer will be used.
compute_metrics (Callable[[transformers.EvalPrediction], Dict], optional, defaults to compute_accuracy): The metrics to use for evaluation. If no metrics are specified, the default metric (compute_accuracy) will be used.
callbacks (List[transformers.TrainerCallback]): The callbacks to use for training.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]): The optimizer and scheduler to use for training.
preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]): The function to use to preprocess the logits before computing the metrics.
peft_config (Optional[PeftConfig]): The PeftConfig object to use to initialize the PeftModel.
dataset_text_field (Optional[str]): The name of the text field of the dataset. If passed by a user, the trainer will automatically create a ConstantLengthDataset based on the dataset_text_field argument.
formatting_func (Optional[Callable]): The formatting function to be used for creating the ConstantLengthDataset.
max_seq_length (Optional[int]): The maximum sequence length to use for the ConstantLengthDataset and for automatically creating the Dataset. Defaults to 512.
infinite (Optional[bool]): Whether to use an infinite dataset or not. Defaults to False.
num_of_sequences (Optional[int]): The number of sequences to use for the ConstantLengthDataset. Defaults to 1024.
chars_per_token (Optional[float]): The number of characters per token to use for the ConstantLengthDataset. Defaults to 3.6. You can check how this is computed in the stack-llama example: https://github.com/boincai/trl/blob/08f550674c553c36c51d1027613c29f14f3676a5/examples/stack_llama/scripts/supervised_finetuning.py#L53.
packing (Optional[bool]): Used only in case dataset_text_field is passed. This argument is used by the ConstantLengthDataset to pack the sequences of the dataset.
dataset_num_proc (Optional[int]): The number of workers to use to tokenize the data. Only used when packing=False. Defaults to None.
dataset_batch_size (int): The number of examples to tokenize per batch. If batch_size <= 0 or batch_size == None, tokenize the full dataset as a single batch. Defaults to 1000.
Class definition of the Supervised Finetuning Trainer (SFT Trainer). This class is a wrapper around the transformers.Trainer
class and inherits all of its attributes and methods. The trainer takes care of properly initializing the PeftModel in case a user passes a PeftConfig
object.
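A minimal sketch of supervised fine-tuning on a plain text dataset; the dataset name, model name, and output directory are assumptions:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    model="gpt2",  # a model name is enough; the trainer loads the weights itself
    args=TrainingArguments(output_dir="sft_output", per_device_train_batch_size=4),
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```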
DPOTrainer
class trl.DPOTrainer
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None, ref_model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None, beta: float = 0.1, args: TrainingArguments = None, data_collator: typing.Optional[DataCollator] = None, label_pad_token_id: int = -100, padding_value: int = 0, truncation_mode: str = 'keep_end', train_dataset: typing.Optional[datasets.arrow_dataset.Dataset] = None, eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, typing.Dict[str, datasets.arrow_dataset.Dataset], NoneType] = None, tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: typing.Union[typing.Callable[[], transformers.modeling_utils.PreTrainedModel], NoneType] = None, callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None, max_length: typing.Optional[int] = None, max_prompt_length: typing.Optional[int] = None, max_target_length: typing.Optional[int] = None, peft_config: typing.Optional[typing.Dict] = None, is_encoder_decoder: typing.Optional[bool] = None, disable_dropout: bool = True, generate_during_eval: bool = False, compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], typing.Dict], NoneType] = None )
Parameters
model (transformers.PreTrainedModel): The model to train, preferably an AutoModelForSequenceClassification.
ref_model (PreTrainedModelWrapper): BOINC AI transformer model with a causal language modelling head. Used for implicit reward computation and loss. If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized.
beta (float, defaults to 0.1): The beta factor in the DPO loss. A higher beta means less divergence from the initial policy.
args (transformers.TrainingArguments): The arguments to use for training.
data_collator (transformers.DataCollator): The data collator to use for training. If None is specified, the default data collator (DPODataCollatorWithPadding) will be used, which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences.
label_pad_token_id (int, defaults to -100): The label pad token id. This argument is required if you want to use the default data collator.
padding_value (int, defaults to 0): The padding value. This argument is required if you want to use the default data collator.
truncation_mode (str, defaults to keep_end): The truncation mode to use, either keep_end or keep_start. This argument is required if you want to use the default data collator.
train_dataset (datasets.Dataset): The dataset to use for training.
eval_dataset (datasets.Dataset): The dataset to use for evaluation.
tokenizer (transformers.PreTrainedTokenizerBase): The tokenizer to use for training. This argument is required if you want to use the default data collator.
model_init (Callable[[], transformers.PreTrainedModel]): The model initializer to use for training. If None is specified, the default model initializer will be used.
callbacks (List[transformers.TrainerCallback]): The callbacks to use for training.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]): The optimizer and scheduler to use for training.
preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]): The function to use to preprocess the logits before computing the metrics.
max_length (int, defaults to None): The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.
max_prompt_length (int, defaults to None): The maximum length of the prompt. This argument is required if you want to use the default data collator.
max_target_length (int, defaults to None): The maximum length of the target. This argument is required if you want to use the default data collator and your model is an encoder-decoder.
peft_config (Dict, defaults to None): The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model.
is_encoder_decoder (Optional[bool], optional, defaults to None): If no model is provided, we need to know if the model_init returns an encoder-decoder.
disable_dropout (bool, defaults to True): Whether or not to disable dropout in model and ref_model.
generate_during_eval (bool, defaults to False): Whether to sample and log generations during the evaluation step.
compute_metrics (Callable[[EvalPrediction], Dict], optional): The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.
Initialize DPOTrainer.
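A minimal sketch of wiring up DPO training; the toy dataset uses the prompt/chosen/rejected column layout expected by the trainer's preprocessing, and the model name, hyperparameters, and output directory are assumptions:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
ref_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

train_dataset = Dataset.from_dict({
    "prompt":   ["What colour is the sky?"],
    "chosen":   [" Blue, on a clear day."],
    "rejected": [" Bright green, always."],
})

dpo_trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,
    args=TrainingArguments(output_dir="dpo_output",
                           per_device_train_batch_size=1,
                           remove_unused_columns=False),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=128,
    max_prompt_length=64,
)
dpo_trainer.train()
```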
concatenated_forward
( model: Modulebatch: typing.Dict[str, typing.Union[typing.List, torch.LongTensor]] )
Run the given model on the given batch of inputs, concatenating the chosen and rejected inputs together.
We do this to avoid doing two forward passes, because it's faster for FSDP.
concatenated_inputs
( batch: typing.Dict[str, typing.Union[typing.List, torch.LongTensor]] )
Concatenate the chosen and rejected inputs into a single tensor.
dpo_loss
( policy_chosen_logps: FloatTensor, policy_rejected_logps: FloatTensor, reference_chosen_logps: FloatTensor, reference_rejected_logps: FloatTensor, reference_free: bool = False ) → A tuple of three tensors
Returns
A tuple of three tensors
(losses, chosen_rewards, rejected_rewards). The losses tensor contains the DPO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively.
Compute the DPO loss for a batch of policy and reference model log probabilities.
evaluation_loop
( dataloader: DataLoaderdescription: strprediction_loss_only: typing.Optional[bool] = Noneignore_keys: typing.Optional[typing.List[str]] = Nonemetric_key_prefix: str = 'eval' )
Overriding built-in evaluation loop to store metrics for each batch. Prediction/evaluation loop, shared by Trainer.evaluate()
and Trainer.predict()
.
Works both with or without labels.
get_batch_metrics
( model, batch: typing.Dict[str, typing.Union[typing.List, torch.LongTensor]], train_eval: typing.Literal['train', 'eval'] = 'train' )
Compute the DPO loss and other metrics for the given batch of inputs for train or test.
get_batch_samples
( model, batch: typing.Dict[str, torch.LongTensor] )
Generate samples from the model and reference model for the given batch of inputs.
log
( logs: typing.Dict[str, float] )
Parameters
logs (Dict[str, float]): The values to log.
Log logs on the various objects watching training, including stored metrics.
DDPOConfig
class trl.DDPOConfig
( exp_name: str = 'doc-buil', run_name: typing.Optional[str] = '', seed: int = 0, log_with: typing.Union[typing.Literal['wandb', 'tensorboard'], NoneType] = None, tracker_kwargs: dict = <factory>, accelerator_kwargs: dict = <factory>, project_kwargs: dict = <factory>, tracker_project_name: str = 'trl', logdir: str = 'logs', num_epochs: int = 100, save_freq: int = 1, num_checkpoint_limit: int = 5, mixed_precision: str = 'fp16', allow_tf32: bool = True, resume_from: typing.Optional[str] = '', sample_num_steps: int = 50, sample_eta: float = 1.0, sample_guidance_scale: float = 5.0, sample_batch_size: int = 1, sample_num_batches_per_epoch: int = 2, train_batch_size: int = 1, train_use_8bit_adam: bool = False, train_learning_rate: float = 0.0003, train_adam_beta1: float = 0.9, train_adam_beta2: float = 0.999, train_adam_weight_decay: float = 0.0001, train_adam_epsilon: float = 1e-08, train_gradient_accumulation_steps: int = 1, train_max_grad_norm: float = 1.0, train_num_inner_epochs: int = 1, train_cfg: bool = True, train_adv_clip_max: float = 5, train_clip_range: float = 0.0001, train_timestep_fraction: float = 1.0, per_prompt_stat_tracking: bool = False, per_prompt_stat_tracking_buffer_size: int = 16, per_prompt_stat_tracking_min_count: int = 16, async_reward_computation: bool = False, max_workers: int = 2, negative_prompts: typing.Optional[str] = '' )
Configuration class for DDPOTrainer
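A minimal sketch of configuring DDPO; the values are illustrative only:

```python
from trl import DDPOConfig

ddpo_config = DDPOConfig(
    num_epochs=10,
    sample_num_steps=50,
    sample_batch_size=2,
    train_batch_size=1,
    train_learning_rate=3e-4,
    mixed_precision="fp16",
    log_with=None,  # or "wandb" / "tensorboard"
)
```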
DDPOTrainer
class trl.DDPOTrainer
( config: DDPOConfig, reward_function: typing.Callable[[torch.Tensor, typing.Tuple[str], typing.Tuple[typing.Any]], torch.Tensor], prompt_function: typing.Callable[[], typing.Tuple[str, typing.Any]], sd_pipeline: DDPOStableDiffusionPipeline, image_samples_hook: typing.Union[typing.Callable[[typing.Any, typing.Any, typing.Any], typing.Any], NoneType] = None )
Parameters
**config** (DDPOConfig): Configuration object for DDPOTrainer. Check the documentation of DDPOConfig for more details.
**reward_function** (Callable[[torch.Tensor, Tuple[str], Tuple[Any]], torch.Tensor]): Reward function to be used.
**prompt_function** (Callable[[], Tuple[str, Any]]): Function to generate prompts to guide the model.
**sd_pipeline** (DDPOStableDiffusionPipeline): Stable Diffusion pipeline to be used for training.
**image_samples_hook** (Optional[Callable[[Any, Any, Any], Any]]): Hook to be called to log images.
The DDPOTrainer uses Denoising Diffusion Policy Optimisation to optimise diffusion models. Note, this trainer is heavily inspired by the work here: https://github.com/kvablack/ddpo-pytorch. As of now, only Stable Diffusion based pipelines are supported.
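A structural sketch of how the trainer is wired together, following the parameter types documented above; the pipeline wrapper, checkpoint name, and reward logic are assumptions to be adapted to your setup:

```python
import torch
from trl import DDPOConfig, DDPOTrainer, DefaultDDPOStableDiffusionPipeline

def prompt_fn():
    # Returns (prompt, prompt_metadata), matching Callable[[], Tuple[str, Any]].
    return "a photo of a cat", {}

def reward_fn(images, prompts, prompt_metadata):
    # Placeholder: a real setup scores the images, e.g. with an aesthetic model.
    # TRL's example scripts return the reward tensor together with a metadata dict.
    return torch.ones(len(images)), {}

config = DDPOConfig(num_epochs=1, sample_batch_size=1,
                    sample_num_batches_per_epoch=1, train_batch_size=1)
pipeline = DefaultDDPOStableDiffusionPipeline("runwayml/stable-diffusion-v1-5")  # assumed checkpoint

trainer = DDPOTrainer(config, reward_fn, prompt_fn, pipeline)
trainer.train()
```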
calculate_loss
( latents, timesteps, next_latents, log_probs, advantages, embeds )
Parameters
latents (torch.Tensor): The latents sampled from the diffusion model, shape: [batch_size, num_steps, ...]
timesteps (torch.Tensor): The timesteps sampled from the diffusion model, shape: [batch_size]
next_latents (torch.Tensor): The next latents sampled from the diffusion model, shape: [batch_size, num_steps, ...]
log_probs (torch.Tensor): The log probabilities of the latents, shape: [batch_size]
advantages (torch.Tensor): The advantages of the latents, shape: [batch_size]
embeds (torch.Tensor): The embeddings of the prompts, shape: [2*batch_size or batch_size, ...] Note: the "or" is because if train_cfg is True, the expectation is that negative prompts are concatenated to the embeds.
Calculate the loss for a batch of an unpacked sample
step
( epoch: int, global_step: int ) → global_step (int)
Parameters
epoch (int): The current epoch.
global_step (int): The current global step.
Returns
global_step (int)
The updated global step.
Perform a single step of training.
Side Effects:
Model weights are updated
Logs the statistics to the accelerator trackers.
If self.image_samples_callback is not None, it will be called with the prompt_image_pairs, global_step, and the accelerator tracker.
train
( epochs: typing.Optional[int] = None )
Train the model for a given number of epochs
set_seed
trl.set_seed
( seed: int )
Parameters
seed (int): The seed to set.
Helper function for reproducible behavior to set the seed in random, numpy, and torch.
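For example:

```python
from trl import set_seed

set_seed(42)  # subsequent random, numpy and torch draws are now reproducible
```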