ONNX
Exporting 🤗 Transformers models to ONNX
🤗 Transformers provides a transformers.onnx
package that enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects.
See the guide on exporting 🤗 Transformers models for more details.
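As a quick illustration, the sketch below shows a typical in-Python export; the distilbert-base-uncased checkpoint and the output path are placeholder choices, and any architecture with a built-in ONNX configuration follows the same pattern.

```python
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.models.distilbert import DistilBertOnnxConfig
from transformers.onnx import export

# Placeholder checkpoint; any supported architecture works the same way.
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Build the ONNX configuration from the model's configuration object.
onnx_config = DistilBertOnnxConfig(model.config)

# Export the graph; the opset must be at least the configuration's default.
onnx_inputs, onnx_outputs = export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=Path("model.onnx"),
)
```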
ONNX Configurations
We provide three abstract classes that you should inherit from, depending on the type of model architecture you wish to export:
Encoder-based models inherit from OnnxConfig
Decoder-based models inherit from OnnxConfigWithPast
Encoder-decoder models inherit from OnnxSeq2SeqConfigWithPast
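For example, an encoder-only architecture can declare its inputs by subclassing OnnxConfig. The class name below is hypothetical; the inputs property mirrors the pattern used by the built-in configurations.

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class CustomEncoderOnnxConfig(OnnxConfig):  # hypothetical subclass for illustration
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Axis 0 (batch) and axis 1 (sequence) are declared dynamic so the
        # exported graph accepts variable batch sizes and sequence lengths.
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```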
OnnxConfig
class transformers.onnx.OnnxConfig
( config: PretrainedConfig, task: str = 'default', patching_specs: typing.List[transformers.onnx.config.PatchingSpec] = None )
Base class for ONNX-exportable models, describing metadata on how to export the model through the ONNX format.
flatten_output_collection_property
( name: str, field: typing.Iterable[typing.Any] ) → Dict[str, Any]
Returns
Dict[str, Any]
Outputs with a flattened structure and keys mapping to this new structure.
Flattens any potential nested structure, expanding the name of the field with the index of the element within the structure.
from_model_config
( config: PretrainedConfig, task: str = 'default' )
Instantiate an OnnxConfig for a specific model.
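As a small sketch (the checkpoint and task are chosen arbitrarily for illustration), a built-in subclass can be instantiated from an existing model configuration:

```python
from transformers import AutoConfig
from transformers.models.distilbert import DistilBertOnnxConfig

# Placeholder checkpoint and task.
config = AutoConfig.from_pretrained("distilbert-base-uncased")
onnx_config = DistilBertOnnxConfig.from_model_config(config, task="sequence-classification")
```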
generate_dummy_inputs
( preprocessor: typing.Union[ForwardRef('PreTrainedTokenizerBase'), ForwardRef('FeatureExtractionMixin'), ForwardRef('ImageProcessingMixin')], batch_size: int = -1, seq_length: int = -1, num_choices: int = -1, is_pair: bool = False, framework: typing.Optional[transformers.utils.generic.TensorType] = None, num_channels: int = 3, image_width: int = 40, image_height: int = 40, sampling_rate: int = 22050, time_duration: float = 5.0, frequency: int = 220, tokenizer: PreTrainedTokenizerBase = None )
Parameters
batch_size (int, optional, defaults to -1) – The batch size to export the model for (-1 means dynamic axis).
num_choices (int, optional, defaults to -1) – The number of candidate answers provided for the multiple choice task (-1 means dynamic axis).
seq_length (int, optional, defaults to -1) – The sequence length to export the model for (-1 means dynamic axis).
is_pair (bool, optional, defaults to False) – Indicate if the input is a pair (sentence 1, sentence 2).
framework (TensorType, optional, defaults to None) – The framework (PyTorch or TensorFlow) that the tokenizer will generate tensors for.
num_channels (int, optional, defaults to 3) – The number of channels of the generated images.
image_width (int, optional, defaults to 40) – The width of the generated images.
image_height (int, optional, defaults to 40) – The height of the generated images.
sampling_rate (int, optional, defaults to 22050) – The sampling rate for audio data generation.
time_duration (float, optional, defaults to 5.0) – Total seconds of sampling for audio data generation.
frequency (int, optional, defaults to 220) – The desired natural frequency of generated audio.
Generate inputs to provide to the ONNX exporter for the specific framework.
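For instance, a sketch assuming a PyTorch install and a placeholder DistilBERT checkpoint:

```python
from transformers import AutoConfig, AutoTokenizer, TensorType
from transformers.models.distilbert import DistilBertOnnxConfig

checkpoint = "distilbert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
onnx_config = DistilBertOnnxConfig(AutoConfig.from_pretrained(checkpoint))

# Dummy tensors matching the configuration's declared input names and dynamic axes.
dummy_inputs = onnx_config.generate_dummy_inputs(
    tokenizer, batch_size=2, seq_length=8, framework=TensorType.PYTORCH
)
print(list(dummy_inputs.keys()))  # expected: ['input_ids', 'attention_mask']
```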
generate_dummy_inputs_onnxruntime
( reference_model_inputs: typing.Mapping[str, typing.Any] ) → Mapping[str, Tensor]
Parameters
reference_model_inputs (Mapping[str, Tensor]) – Reference inputs for the model.
Returns
Mapping[str, Tensor]
The mapping holding the kwargs to provide to the model's forward function.
Generate inputs for ONNX Runtime using the reference model inputs. Override this to run inference with seq2seq models which have the encoder and decoder exported as separate ONNX files.
use_external_data_format
( num_parameters: int )
Flag indicating if the model requires using the external data format.
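In other words, it returns True when the serialized float32 parameters would not fit within the ONNX protobuf size limit; a rough sketch:

```python
from transformers.onnx import OnnxConfig

# A model with ~1 billion float32 parameters serializes to roughly 4 GB,
# which exceeds the protobuf limit, so the external data format is required.
print(OnnxConfig.use_external_data_format(num_parameters=1_000_000_000))  # True
```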
OnnxConfigWithPast
class transformers.onnx.OnnxConfigWithPast
( config: PretrainedConfig, task: str = 'default', patching_specs: typing.List[transformers.onnx.config.PatchingSpec] = None, use_past: bool = False )
fill_with_past_key_values_
( inputs_or_outputs: typing.Mapping[str, typing.Mapping[int, str]], direction: str, inverted_values_shape: bool = False )
Fill the inputs_or_outputs mapping with past_key_values dynamic axes, considering the direction.
with_past
( config: PretrainedConfig, task: str = 'default' )
Instantiate an OnnxConfig with the use_past attribute set to True.
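For example, with GPT-2 chosen as an illustrative decoder checkpoint:

```python
from transformers import AutoConfig
from transformers.models.gpt2 import GPT2OnnxConfig

config = AutoConfig.from_pretrained("gpt2")  # placeholder checkpoint

# Equivalent to GPT2OnnxConfig(config, task="causal-lm", use_past=True).
onnx_config_with_past = GPT2OnnxConfig.with_past(config, task="causal-lm")
print(onnx_config_with_past.use_past)  # True
```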
OnnxSeq2SeqConfigWithPast
class transformers.onnx.OnnxSeq2SeqConfigWithPast
( config: PretrainedConfig, task: str = 'default', patching_specs: typing.List[transformers.onnx.config.PatchingSpec] = None, use_past: bool = False )
ONNX Features
Each ONNX configuration is associated with a set of features that enable you to export models for different types of topologies or tasks.
FeaturesManager
class transformers.onnx.FeaturesManager
( )
check_supported_model_or_raise
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')], feature: str = 'default' )
Check whether or not the model has the requested features.
determine_framework
( model: str, framework: str = None )
Parameters
model (str) – The name of the model to export.
framework (str, optional, defaults to None) – The framework to use for the export. See above for priority if none is provided.
Determines the framework to use for the export.
The priority is in the following order:
1. User input via framework.
2. If a local checkpoint is provided, use the same framework as the checkpoint.
3. Available framework in the environment, with priority given to PyTorch.
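A quick sketch; the Hub checkpoint name is a placeholder and the result depends on which backends are installed locally:

```python
from transformers.onnx import FeaturesManager

# With no explicit framework and both backends available, PyTorch is preferred.
framework = FeaturesManager.determine_framework("distilbert-base-uncased")
print(framework)  # "pt" when PyTorch is installed, otherwise "tf"
```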
get_config
( model_type: str, feature: str ) → OnnxConfig
Parameters
model_type (str) – The model type to retrieve the config for.
feature (str) – The feature to retrieve the config for.
Returns
OnnxConfig
The config for the combination.
Gets the OnnxConfig for a model_type and feature combination.
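In practice the returned value behaves as a constructor that is then called with the model's PretrainedConfig; a sketch with a placeholder checkpoint and an arbitrarily chosen feature:

```python
from transformers import AutoConfig
from transformers.onnx import FeaturesManager

# Retrieve the config registered for this (model type, feature) pair
# and instantiate it with a concrete model configuration.
config_constructor = FeaturesManager.get_config("distilbert", "sequence-classification")
onnx_config = config_constructor(AutoConfig.from_pretrained("distilbert-base-uncased"))
```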
get_model_class_for_feature
( feature: str, framework: str = 'pt' )
Parameters
feature (str) – The feature required.
framework (str, optional, defaults to "pt") – The framework to use for the export.
Attempts to retrieve an AutoModel class from a feature name.
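For instance, with an arbitrarily chosen feature name:

```python
from transformers.onnx import FeaturesManager

# Maps a feature name to the corresponding Auto class.
model_class = FeaturesManager.get_model_class_for_feature("sequence-classification")
print(model_class.__name__)  # AutoModelForSequenceClassification
```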
get_model_from_feature
( feature: str, model: str, framework: str = None, cache_dir: str = None )
Parameters
feature (str) – The feature required.
model (str) – The name of the model to export.
framework (str, optional, defaults to None) – The framework to use for the export. See FeaturesManager.determine_framework for the priority should none be provided.
Attempts to retrieve a model from a model's name and the feature to be enabled.
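Combined with check_supported_model_or_raise(), this is the typical validation step before an export; the checkpoint name below is a placeholder:

```python
from transformers.onnx import FeaturesManager

feature = "sequence-classification"
model = FeaturesManager.get_model_from_feature(feature, "distilbert-base-uncased")

# Verify the architecture supports the feature and get its OnnxConfig constructor.
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = model_onnx_config(model.config)
```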
get_supported_features_for_model_type
( model_type: str, model_name: typing.Optional[str] = None )
Parameters
model_type (str) – The model type to retrieve the supported features for.
model_name (str, optional) – The name attribute of the model object, only used for the exception message.
Tries to retrieve the feature -> OnnxConfig constructor map from the model type.
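A minimal sketch, using "distilbert" as an illustrative model type; the printed feature names are indicative only:

```python
from transformers.onnx import FeaturesManager

# Feature names registered for the "distilbert" model type.
features = FeaturesManager.get_supported_features_for_model_type("distilbert")
print(list(features.keys()))
# e.g. ['default', 'masked-lm', 'sequence-classification', 'multiple-choice',
#       'token-classification', 'question-answering']
```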