Trainer
ORTTrainer
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None, args: ORTTrainingArguments = None, data_collator: typing.Optional[DataCollator] = None, train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None, eval_dataset: typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], NoneType] = None, tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: typing.Union[typing.Callable[[], transformers.modeling_utils.PreTrainedModel], NoneType] = None, compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None, callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None )
Parameters
model (PreTrainedModel or torch.nn.Module, optional) - The model to train, evaluate or use for predictions. If not provided, a model_init must be passed.
ORTTrainer is optimized to work with the PreTrainedModel provided by the 🤗 Transformers library. You can still use your own models defined as torch.nn.Module for training with the ONNX Runtime backend and inference with the PyTorch backend, as long as they work the same way as 🤗 Transformers models.
args (ORTTrainingArguments, optional) - The arguments to tweak for training. Will default to a basic instance of ORTTrainingArguments with the output_dir set to a directory named tmp_trainer in the current directory if not provided.
data_collator (DataCollator, optional) - The function to use to form a batch from a list of elements of train_dataset or eval_dataset. Will default to default_data_collator() if no tokenizer is provided, an instance of DataCollatorWithPadding otherwise.
train_dataset (torch.utils.data.Dataset or torch.utils.data.IterableDataset, optional) - The dataset to use for training. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Note that if it's a torch.utils.data.IterableDataset with some randomization and you are training in a distributed fashion, your iterable dataset should either use an internal attribute generator that is a torch.Generator for the randomization that must be identical on all processes (and the ORTTrainer will manually set the seed of this generator at each epoch), or have a set_epoch() method that internally sets the seed of the RNGs used.
eval_dataset (Union[torch.utils.data.Dataset, Dict[str, torch.utils.data.Dataset]], optional) - The dataset to use for evaluation. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. If it is a dictionary, it will evaluate on each dataset, prepending the dictionary key to the metric name.
tokenizer (PreTrainedTokenizerBase, optional) - The tokenizer used to preprocess the data. If provided, it will be used to automatically pad the inputs to the maximum length when batching inputs, and it will be saved along with the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
model_init (Callable[[], PreTrainedModel], optional) - A function that instantiates the model to be used. If provided, each call to ORTTrainer.train will start from a new instance of the model as given by this function. The function may have zero arguments, or a single one containing the optuna/Ray Tune/SigOpt trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers, dropout probabilities, etc.).
compute_metrics (Callable[[EvalPrediction], Dict], optional) - The function that will be used to compute metrics at evaluation. Must take an EvalPrediction and return a dictionary mapping metric names to metric values.
callbacks (List of TrainerCallback, optional) - A list of callbacks to customize the training loop. These will be added to the list of default callbacks detailed in the 🤗 Transformers callback documentation. If you want to remove one of the default callbacks used, use the ORTTrainer.remove_callback method.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR], optional) - A tuple containing the optimizer and the scheduler to use. Will default to an instance of AdamW on your model and a scheduler given by get_linear_schedule_with_warmup controlled by args.
preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor], optional) - A function that preprocesses the logits right before caching them at each evaluation step. Must take two tensors, the logits and the labels, and return the logits once processed as desired. The modifications made by this function will be reflected in the predictions received by compute_metrics. Note that the labels (second parameter) will be None if the dataset does not have them.
ORTTrainer is a simple but feature-complete training and eval loop for ONNX Runtime, optimized for 🤗 Transformers.
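Below is a minimal usage sketch. The checkpoint, dataset, and hyperparameter values are placeholder choices for illustration; ORTTrainer follows the same API as the 🤗 Transformers Trainer.

```python
import numpy as np
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

# Placeholder checkpoint and dataset, chosen only for illustration.
model_id = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("glue", "sst2")
dataset = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    # eval_pred is an EvalPrediction holding (predictions, label_ids)
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}

training_args = ORTTrainingArguments(
    output_dir="tmp_trainer",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    optim="adamw_ort_fused",  # fused AdamW implemented in ONNX Runtime
)

trainer = ORTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,              # enables dynamic padding via DataCollatorWithPadding
    compute_metrics=compute_metrics,
)

trainer.train()
metrics = trainer.evaluate()
```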
Important attributes:
model - Always points to the core model. If using a transformers model, it will be a PreTrainedModel subclass.
model_wrapped - Always points to the most external model in case one or more other modules wrap the original model. This is the model that should be used for the forward pass. For example, under DeepSpeed, the inner model is first wrapped in ORTModule, then in DeepSpeed, and then again in torch.nn.DistributedDataParallel. If the inner model hasn't been wrapped, then self.model_wrapped is the same as self.model.
is_model_parallel - Whether or not a model has been switched to a model parallel mode (different from data parallelism, this means some of the model layers are split on different GPUs).
place_model_on_device - Whether or not to automatically place the model on the device. It will be set to False if model parallel or deepspeed is used, or if the default ORTTrainingArguments.place_model_on_device is overridden to return False.
is_in_train - Whether or not a model is currently running train (e.g. when evaluate is called while in train).
create_optimizer
( )
Set up the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the ORTTrainer's init through optimizers, or subclass and override this method.
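For instance, a custom optimizer and scheduler can be passed through the optimizers tuple. The step counts below are illustrative placeholders, and model, training_args, and dataset are assumed to be defined as in the sketch above.

```python
import torch
from transformers import get_linear_schedule_with_warmup
from optimum.onnxruntime import ORTTrainer

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=1000  # placeholder totals
)

trainer = ORTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    optimizers=(optimizer, scheduler),
)
```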
get_ort_optimizer_cls_and_kwargs
( args: ORTTrainingArguments )
Parameters
args (ORTTrainingArguments) - The training arguments for the training session.
Returns the optimizer class and optimizer parameters implemented in ONNX Runtime based on ORTTrainingArguments.
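A small illustration of inspecting which ONNX Runtime optimizer class will be used. This assumes the method is exposed as a static helper on ORTTrainer, which may vary across optimum versions.

```python
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

args = ORTTrainingArguments(output_dir="tmp_trainer", optim="adamw_ort_fused")
optimizer_cls, optimizer_kwargs = ORTTrainer.get_ort_optimizer_cls_and_kwargs(args)
print(optimizer_cls, optimizer_kwargs)
```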
train
( resume_from_checkpoint: typing.Union[str, bool, NoneType] = None, trial: typing.Union[ForwardRef('optuna.Trial'), typing.Dict[str, typing.Any]] = None, ignore_keys_for_eval: typing.Optional[typing.List[str]] = None, **kwargs )
Parameters
resume_from_checkpoint (str or bool, optional) - If a str, local path to a saved checkpoint as saved by a previous instance of ORTTrainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of ORTTrainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
trial (optuna.Trial or Dict[str, Any], optional) - The trial run or the hyperparameter dictionary for hyperparameter search.
ignore_keys_for_eval (List[str], optional) - A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
kwargs (Dict[str, Any], optional) - Additional keyword arguments used to hide deprecated arguments.
Main entry point for training with ONNX Runtime accelerator.
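A sketch of resuming an interrupted run, assuming the trainer from the example above previously wrote checkpoints into the same output_dir (the checkpoint path shown is a placeholder):

```python
# Resume from the last checkpoint found in args.output_dir.
trainer.train(resume_from_checkpoint=True)

# Or resume from an explicit checkpoint directory.
trainer.train(resume_from_checkpoint="tmp_trainer/checkpoint-500")
```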
ORTSeq2SeqTrainer
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None, args: ORTTrainingArguments = None, data_collator: typing.Optional[DataCollator] = None, train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None, eval_dataset: typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], NoneType] = None, tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, model_init: typing.Union[typing.Callable[[], transformers.modeling_utils.PreTrainedModel], NoneType] = None, compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None, callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None )
evaluate
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None, ignore_keys: typing.Optional[typing.List[str]] = None, metric_key_prefix: str = 'eval', **gen_kwargs )
Parameters
eval_dataset (Dataset, optional) - Pass a dataset if you wish to override self.eval_dataset. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) - A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") - An optional prefix to be used as the metrics key prefix. For example, the metric "bleu" will be named "eval_bleu" if the prefix is "eval" (default).
max_length (int, optional) - The maximum target length to use when predicting with the generate method.
num_beams (int, optional) - Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.
gen_kwargs - Additional generate-specific kwargs.
Run evaluation and return metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument).
You can also subclass and override this method to inject custom behavior.
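For instance, assuming trainer is an ORTSeq2SeqTrainer built analogously to the ORTTrainer sketch above (a seq2seq model and ORTSeq2SeqTrainingArguments with predict_with_generate=True):

```python
# Generation-specific kwargs are forwarded to generate() during evaluation.
metrics = trainer.evaluate(max_length=128, num_beams=4)
print(metrics)  # metric keys are prefixed with "eval_", e.g. "eval_loss"
```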
predict
( test_dataset: Dataset, ignore_keys: typing.Optional[typing.List[str]] = None, metric_key_prefix: str = 'test', **gen_kwargs )
Parameters
test_dataset (Dataset) - Dataset to run the predictions on. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__.
ignore_keys (List[str], optional) - A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "test") - An optional prefix to be used as the metrics key prefix. For example, the metric "bleu" will be named "test_bleu" if the prefix is "test" (default).
max_length (int, optional) - The maximum target length to use when predicting with the generate method.
num_beams (int, optional) - Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.
gen_kwargs - Additional generate-specific kwargs.
Run prediction and return predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
If your predictions or labels have different sequence lengths (for instance because you're doing dynamic padding in a token classification task), the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple - A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
ORTTrainingArguments
( output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: typing.Optional[int] = None, per_gpu_eval_batch_size: typing.Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: typing.Optional[int] = None, eval_delay: typing.Optional[float] = 0, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, log_level: typing.Optional[str] = 'passive', log_level_replica: typing.Optional[str] = 'warning', log_on_each_node: bool = True, logging_dir: typing.Optional[str] = None, logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', logging_first_step: bool = False, logging_steps: float = 500, logging_nan_inf_filter: bool = True, save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', save_steps: float = 500, save_total_limit: typing.Optional[int] = None, save_safetensors: typing.Optional[bool] = False, save_on_each_node: bool = False, no_cuda: bool = False, use_cpu: bool = False, use_mps_device: bool = False, seed: int = 42, data_seed: typing.Optional[int] = None, jit_mode_eval: bool = False, use_ipex: bool = False, bf16: bool = False, fp16: bool = False, fp16_opt_level: str = 'O1', half_precision_backend: str = 'auto', bf16_full_eval: bool = False, fp16_full_eval: bool = False, tf32: typing.Optional[bool] = None, local_rank: int = -1, ddp_backend: typing.Optional[str] = None, tpu_num_cores: typing.Optional[int] = None, tpu_metrics_debug: bool = False, debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '', dataloader_drop_last: bool = False, eval_steps: typing.Optional[float] = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: typing.Optional[str] = None, disable_tqdm: typing.Optional[bool] = None, remove_unused_columns: typing.Optional[bool] = True, label_names: typing.Optional[typing.List[str]] = None, load_best_model_at_end: typing.Optional[bool] = False, metric_for_best_model: typing.Optional[str] = None, greater_is_better: typing.Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: typing.Union[typing.List[transformers.trainer_utils.ShardedDDPOption], str, NoneType] = '', fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '', fsdp_min_num_params: int = 0, fsdp_config: typing.Optional[str] = None, fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None, deepspeed: typing.Optional[str] = None, label_smoothing_factor: float = 0.0, optim: typing.Optional[str] = 'adamw_ba', optim_args: typing.Optional[str] = None, adafactor: bool = False, group_by_length: bool = False, length_column_name: typing.Optional[str] = 'length', report_to: typing.Optional[typing.List[str]] = None, ddp_find_unused_parameters: typing.Optional[bool] = None, ddp_bucket_cap_mb: typing.Optional[int] = None, ddp_broadcast_buffers: typing.Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: typing.Optional[str] = None, hub_model_id: typing.Optional[str] = None, hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save', hub_token: typing.Optional[str] = None, hub_private_repo: bool = False, hub_always_push: bool = False, gradient_checkpointing: bool = False, include_inputs_for_metrics: bool = False, fp16_backend: str = 'auto', push_to_hub_model_id: typing.Optional[str] = None, push_to_hub_organization: typing.Optional[str] = None, push_to_hub_token: typing.Optional[str] = None, mp_parameters: str = '', auto_find_batch_size: bool = False, full_determinism: bool = False, torchdynamo: typing.Optional[str] = None, ray_scope: typing.Optional[str] = 'last', ddp_timeout: typing.Optional[int] = 1800, torch_compile: bool = False, torch_compile_backend: typing.Optional[str] = None, torch_compile_mode: typing.Optional[str] = None, dispatch_batches: typing.Optional[bool] = None, include_tokens_per_second: typing.Optional[bool] = False, use_module_with_loss: typing.Optional[bool] = False )
Parameters
optim (str or training_args.ORTOptimizerNames or transformers.training_args.OptimizerNames, optional, defaults to "adamw_ba") - The optimizer to use, including optimizers in Transformers: adamw_ba, adamw_torch, adamw_apex_fused, or adafactor; and optimizers implemented by ONNX Runtime: adamw_ort_fused.
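A sketch of selecting the ONNX Runtime fused optimizer; the other argument values are illustrative.

```python
from optimum.onnxruntime import ORTTrainingArguments

training_args = ORTTrainingArguments(
    output_dir="tmp_trainer",
    optim="adamw_ort_fused",  # fused AdamW kernel from ONNX Runtime
    learning_rate=5e-5,
    per_device_train_batch_size=8,
)
```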
ORTSeq2SeqTrainingArguments
( output_dir: str, overwrite_output_dir: bool = False, do_train: bool = False, do_eval: bool = False, do_predict: bool = False, evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no', prediction_loss_only: bool = False, per_device_train_batch_size: int = 8, per_device_eval_batch_size: int = 8, per_gpu_train_batch_size: typing.Optional[int] = None, per_gpu_eval_batch_size: typing.Optional[int] = None, gradient_accumulation_steps: int = 1, eval_accumulation_steps: typing.Optional[int] = None, eval_delay: typing.Optional[float] = 0, learning_rate: float = 5e-05, weight_decay: float = 0.0, adam_beta1: float = 0.9, adam_beta2: float = 0.999, adam_epsilon: float = 1e-08, max_grad_norm: float = 1.0, num_train_epochs: float = 3.0, max_steps: int = -1, lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear', warmup_ratio: float = 0.0, warmup_steps: int = 0, log_level: typing.Optional[str] = 'passive', log_level_replica: typing.Optional[str] = 'warning', log_on_each_node: bool = True, logging_dir: typing.Optional[str] = None, logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', logging_first_step: bool = False, logging_steps: float = 500, logging_nan_inf_filter: bool = True, save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps', save_steps: float = 500, save_total_limit: typing.Optional[int] = None, save_safetensors: typing.Optional[bool] = False, save_on_each_node: bool = False, no_cuda: bool = False, use_cpu: bool = False, use_mps_device: bool = False, seed: int = 42, data_seed: typing.Optional[int] = None, jit_mode_eval: bool = False, use_ipex: bool = False, bf16: bool = False, fp16: bool = False, fp16_opt_level: str = 'O1', half_precision_backend: str = 'auto', bf16_full_eval: bool = False, fp16_full_eval: bool = False, tf32: typing.Optional[bool] = None, local_rank: int = -1, ddp_backend: typing.Optional[str] = None, tpu_num_cores: typing.Optional[int] = None, tpu_metrics_debug: bool = False, debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '', dataloader_drop_last: bool = False, eval_steps: typing.Optional[float] = None, dataloader_num_workers: int = 0, past_index: int = -1, run_name: typing.Optional[str] = None, disable_tqdm: typing.Optional[bool] = None, remove_unused_columns: typing.Optional[bool] = True, label_names: typing.Optional[typing.List[str]] = None, load_best_model_at_end: typing.Optional[bool] = False, metric_for_best_model: typing.Optional[str] = None, greater_is_better: typing.Optional[bool] = None, ignore_data_skip: bool = False, sharded_ddp: typing.Union[typing.List[transformers.trainer_utils.ShardedDDPOption], str, NoneType] = '', fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '', fsdp_min_num_params: int = 0, fsdp_config: typing.Optional[str] = None, fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None, deepspeed: typing.Optional[str] = None, label_smoothing_factor: float = 0.0, optim: typing.Optional[str] = 'adamw_ba', optim_args: typing.Optional[str] = None, adafactor: bool = False, group_by_length: bool = False, length_column_name: typing.Optional[str] = 'length', report_to: typing.Optional[typing.List[str]] = None, ddp_find_unused_parameters: typing.Optional[bool] = None, ddp_bucket_cap_mb: typing.Optional[int] = None, ddp_broadcast_buffers: typing.Optional[bool] = None, dataloader_pin_memory: bool = True, skip_memory_metrics: bool = True, use_legacy_prediction_loop: bool = False, push_to_hub: bool = False, resume_from_checkpoint: typing.Optional[str] = None, hub_model_id: typing.Optional[str] = None, hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save', hub_token: typing.Optional[str] = None, hub_private_repo: bool = False, hub_always_push: bool = False, gradient_checkpointing: bool = False, include_inputs_for_metrics: bool = False, fp16_backend: str = 'auto', push_to_hub_model_id: typing.Optional[str] = None, push_to_hub_organization: typing.Optional[str] = None, push_to_hub_token: typing.Optional[str] = None, mp_parameters: str = '', auto_find_batch_size: bool = False, full_determinism: bool = False, torchdynamo: typing.Optional[str] = None, ray_scope: typing.Optional[str] = 'last', ddp_timeout: typing.Optional[int] = 1800, torch_compile: bool = False, torch_compile_backend: typing.Optional[str] = None, torch_compile_mode: typing.Optional[str] = None, dispatch_batches: typing.Optional[bool] = None, include_tokens_per_second: typing.Optional[bool] = False, use_module_with_loss: typing.Optional[bool] = False, sortish_sampler: bool = False, predict_with_generate: bool = False, generation_max_length: typing.Optional[int] = None, generation_num_beams: typing.Optional[int] = None, generation_config: typing.Union[str, pathlib.Path, transformers.generation.configuration_utils.GenerationConfig, NoneType] = None )
Parameters
optim (str or training_args.ORTOptimizerNames or transformers.training_args.OptimizerNames, optional, defaults to "adamw_ba") - The optimizer to use, including optimizers in Transformers: adamw_ba, adamw_torch, adamw_apex_fused, or adafactor; and optimizers implemented by ONNX Runtime: adamw_ort_fused.
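For sequence-to-sequence fine-tuning, generation-based evaluation can be enabled in the same way; the values below are illustrative.

```python
from optimum.onnxruntime import ORTSeq2SeqTrainingArguments

training_args = ORTSeq2SeqTrainingArguments(
    output_dir="tmp_trainer",
    optim="adamw_ort_fused",
    predict_with_generate=True,   # use generate() during evaluation/prediction
    generation_max_length=128,
    generation_num_beams=4,
)
```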