models
Definitions of all models available in Transformers.js.
Example: Load and run an AutoModel.
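A minimal sketch of this usage (the package name and model ID shown here are illustrative; substitute the checkpoint you actually use):

```js
import { AutoModel, AutoTokenizer } from '@xenova/transformers';

// Load the tokenizer and model (downloads and caches the ONNX weights on first use).
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
const model = await AutoModel.from_pretrained('Xenova/bert-base-uncased');

// Tokenize the input text and run the model.
const inputs = await tokenizer('I love transformers!');
const { logits } = await model(inputs);
```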
We also provide other AutoModels (listed below), which you can use in the same way as the Python library. For example:
Example: Load and run an AutoModelForSeq2SeqLM.
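A minimal sketch, again with an illustrative model ID:

```js
import { AutoModelForSeq2SeqLM, AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/t5-small');
const model = await AutoModelForSeq2SeqLM.from_pretrained('Xenova/t5-small');

// Prepare the input, generate, and decode the output tokens.
const { input_ids } = await tokenizer('translate English to German: I love transformers!');
const outputs = await model.generate(input_ids);
const translated = tokenizer.decode(outputs[0], { skip_special_tokens: true });
```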
models.PreTrainedModel
A base class for pre-trained models that provides the model configuration and an ONNX session.
new PreTrainedModel(config, session)
Creates a new instance of the PreTrainedModel class.
config
Object
The model configuration.
session
any
The ONNX session for the model.
preTrainedModel.dispose() ⇒ Promise<Array<unknown>>
Disposes of all the ONNX sessions that were created during inference.
preTrainedModel._call(model_inputs) ⇒ Promise<Object>
Runs the model with the provided inputs.
model_inputs
Object
Object containing input tensors
preTrainedModel.forward(model_inputs) ⇒ Promise<Object>
Forward method for a pretrained model. If not overridden by a subclass, the correct forward method will be chosen based on the model type.
Throws: Error. This method must be implemented in subclasses.
model_inputs
Object
The input data to the model in the format specified in the ONNX model.
preTrainedModel._get_generation_config(generation_config) ⇒ GenerationConfig
This function merges multiple generation configs together to form a final generation config to be used by the model for text generation. It first creates an empty GenerationConfig object, then applies the model's own generation_config property to it. Finally, if a generation_config object was passed in the arguments, it overwrites the corresponding properties in the final config with those of the passed config object.
generation_config
GenerationConfig
A GenerationConfig object containing generation parameters.
preTrainedModel.groupBeams(beams) ⇒ Array
Groups an array of beam objects by their ids.
beams
Array
The array of beam objects to group.
preTrainedModel.getPastKeyValues(decoderResults, pastKeyValues) ⇒ Object
Returns an object containing past key values from the given decoder results object.
decoderResults
Object
The decoder results object.
pastKeyValues
Object
The previous past key values.
preTrainedModel.getAttentions(decoderResults) ⇒ Object
Returns an object containing attentions from the given decoder results object.
decoderResults
Object
The decoder results object.
preTrainedModel.addPastKeyValues(decoderFeeds, pastKeyValues)
Adds past key values to the decoder feeds object. If pastKeyValues is null, creates new tensors for past key values.
decoderFeeds
Object
The decoder feeds object to add past key values to.
pastKeyValues
Object
An object containing past key values.
PreTrainedModel.from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise<PreTrainedModel>
Instantiate one of the model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible).
pretrained_model_name_or_path
string
The name or path of the pretrained model. Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on boincai.com. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights, e.g., ./my_model_directory/.
options
*
Additional options for loading the model.
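As a hedged illustration of the options parameter (the quantized and progress_callback fields shown are assumptions based on common Transformers.js loading options; consult the library's PretrainedOptions definition for the authoritative list):

```js
import { AutoModel } from '@xenova/transformers';

// Load a model with explicit loading options (illustrative model ID).
const model = await AutoModel.from_pretrained('Xenova/bert-base-uncased', {
  quantized: true,                                  // use the quantized ONNX weights
  progress_callback: (p) => console.log(p.status),  // report download progress
});
```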
models.BaseModelOutput
Base class for model’s outputs, with potential hidden states and attentions.
new BaseModelOutput(output)
output
Object
The output of the model.
output.last_hidden_state
Tensor
Sequence of hidden-states at the output of the last layer of the model.
[output.hidden_states]
Tensor
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
[output.attentions]
Tensor
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
models.BertForMaskedLM
BertForMaskedLM is a class representing a BERT model for masked language modeling.
bertForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.BertForSequenceClassification
BertForSequenceClassification is a class representing a BERT model for sequence classification.
bertForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.BertForTokenClassification
BertForTokenClassification is a class representing a BERT model for token classification.
bertForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.BertForQuestionAnswering
BertForQuestionAnswering is a class representing a BERT model for question answering.
bertForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.CamembertModel
The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top.
models.CamembertForMaskedLM
CamemBERT Model with a language modeling head on top.
camembertForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.CamembertForSequenceClassification
CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
camembertForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.CamembertForTokenClassification
CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
camembertForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.CamembertForQuestionAnswering
CamemBERT Model with a span classification head on top for extractive question-answering tasks.
camembertForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaModel
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.
models.DebertaForMaskedLM
DeBERTa Model with a language modeling head on top.
debertaForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaForSequenceClassification
DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output)
debertaForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaForTokenClassification
DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
debertaForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaForQuestionAnswering
DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).
debertaForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaV2Model
The bare DeBERTa-V2 Model transformer outputting raw hidden-states without any specific head on top.
models.DebertaV2ForMaskedLM
DeBERTa-V2 Model with a language modeling head on top.
debertaV2ForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaV2ForSequenceClassification
DeBERTa-V2 Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output)
debertaV2ForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaV2ForTokenClassification
DeBERTa-V2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
debertaV2ForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DebertaV2ForQuestionAnswering
DeBERTa-V2 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).
debertaV2ForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DistilBertForSequenceClassification
DistilBertForSequenceClassification is a class representing a DistilBERT model for sequence classification.
distilBertForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DistilBertForTokenClassification
DistilBertForTokenClassification is a class representing a DistilBERT model for token classification.
distilBertForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DistilBertForQuestionAnswering
DistilBertForQuestionAnswering is a class representing a DistilBERT model for question answering.
distilBertForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.DistilBertForMaskedLM
DistilBertForMaskedLM is a class representing a DistilBERT model for masked language modeling.
distilBertForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MobileBertForMaskedLM
MobileBertForMaskedLM is a class representing a MobileBERT model for masked language modeling.
mobileBertForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MobileBertForSequenceClassification
MobileBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output)
mobileBertForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MobileBertForQuestionAnswering
MobileBert Model with a span classification head on top for extractive question-answering tasks.
mobileBertForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MPNetModel
The bare MPNet Model transformer outputting raw hidden-states without any specific head on top.
models.MPNetForMaskedLM
MPNetForMaskedLM is a class representing an MPNet model for masked language modeling.
mpNetForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MPNetForSequenceClassification
MPNetForSequenceClassification is a class representing an MPNet model for sequence classification.
mpNetForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MPNetForTokenClassification
MPNetForTokenClassification is a class representing an MPNet model for token classification.
mpNetForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MPNetForQuestionAnswering
MPNetForQuestionAnswering is a class representing an MPNet model for question answering.
mpNetForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.T5ForConditionalGeneration
T5ForConditionalGeneration is a class representing a T5 model for conditional generation.
new T5ForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the T5ForConditionalGeneration class.
config
Object
The model configuration.
session
any
The ONNX session for the model.
decoder_merged_session
any
The ONNX session for the decoder.
generation_config
GenerationConfig
The generation configuration.
models.LongT5PreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.
models.LongT5Model
The bare LONGT5 Model transformer outputting raw hidden-states without any specific head on top.
models.LongT5ForConditionalGeneration
LONGT5 Model with a language modeling head on top.
new LongT5ForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the LongT5ForConditionalGeneration class.
config
Object
The model configuration.
session
any
The ONNX session for the model.
decoder_merged_session
any
The ONNX session for the decoder.
generation_config
GenerationConfig
The generation configuration.
models.MT5ForConditionalGeneration
A class representing a conditional sequence-to-sequence model based on the MT5 architecture.
new MT5ForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the MT5ForConditionalGeneration class.
config
any
The model configuration.
session
any
The ONNX session containing the encoder weights.
decoder_merged_session
any
The ONNX session containing the merged decoder weights.
generation_config
GenerationConfig
The generation configuration.
models.BartModel
The bare BART Model outputting raw hidden-states without any specific head on top.
models.BartForConditionalGeneration
The BART Model with a language modeling head. Can be used for summarization.
new BartForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the BartForConditionalGeneration class.
config
Object
The configuration object for the Bart model.
session
Object
The ONNX session used to execute the model.
decoder_merged_session
Object
The ONNX session used to execute the decoder.
generation_config
Object
The generation configuration object.
models.BartForSequenceClassification
Bart model with a sequence classification head on top (a linear layer on top of the pooled output).
bartForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MBartModel
The bare MBART Model outputting raw hidden-states without any specific head on top.
models.MBartForConditionalGeneration
The MBART Model with a language modeling head. Can be used for summarization, after fine-tuning the pretrained models.
new MBartForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the MBartForConditionalGeneration class.
config
Object
The configuration object for the MBart model.
session
Object
The ONNX session used to execute the model.
decoder_merged_session
Object
The ONNX session used to execute the decoder.
generation_config
Object
The generation configuration object.
models.MBartForSequenceClassification
MBart model with a sequence classification head on top (a linear layer on top of the pooled output).
mBartForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.MBartForCausalLM
new MBartForCausalLM(config, decoder_merged_session, generation_config)
Creates a new instance of the MBartForCausalLM class.
config
Object
Configuration object for the model.
decoder_merged_session
Object
ONNX Session object for the decoder.
generation_config
Object
Configuration object for the generation process.
models.BlenderbotModel
The bare Blenderbot Model outputting raw hidden-states without any specific head on top.
models.BlenderbotForConditionalGeneration
The Blenderbot Model with a language modeling head. Can be used for summarization.
new BlenderbotForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the BlenderbotForConditionalGeneration class.
config
any
The model configuration.
session
any
The ONNX session containing the encoder weights.
decoder_merged_session
any
The ONNX session containing the merged decoder weights.
generation_config
GenerationConfig
The generation configuration.
models.BlenderbotSmallModel
The bare BlenderbotSmall Model outputting raw hidden-states without any specific head on top.
models.BlenderbotSmallForConditionalGeneration
The BlenderbotSmall Model with a language modeling head. Can be used for summarization.
new BlenderbotSmallForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the BlenderbotSmallForConditionalGeneration class.
config
any
The model configuration.
session
any
The ONNX session containing the encoder weights.
decoder_merged_session
any
The ONNX session containing the merged decoder weights.
generation_config
GenerationConfig
The generation configuration.
models.RobertaForMaskedLM
RobertaForMaskedLM class for performing masked language modeling on Roberta models.
robertaForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.RobertaForSequenceClassification
RobertaForSequenceClassification class for performing sequence classification on Roberta models.
robertaForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.RobertaForTokenClassification
RobertaForTokenClassification class for performing token classification on Roberta models.
robertaForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.RobertaForQuestionAnswering
RobertaForQuestionAnswering class for performing question answering on Roberta models.
robertaForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMPreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.
models.XLMModel
The bare XLM Model transformer outputting raw hidden-states without any specific head on top.
models.XLMWithLMHeadModel
The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
xlmWithLMHeadModel._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMForSequenceClassification
XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output)
xlmForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMForTokenClassification
XLM Model with a token classification head on top (a linear layer on top of the hidden-states output)
xlmForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMForQuestionAnswering
XLM Model with a span classification head on top for extractive question-answering tasks.
xlmForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMRobertaForMaskedLM
XLMRobertaForMaskedLM class for performing masked language modeling on XLMRoberta models.
xlmRobertaForMaskedLM._call(model_inputs) ⇒ Promise<MaskedLMOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMRobertaForSequenceClassification
XLMRobertaForSequenceClassification class for performing sequence classification on XLMRoberta models.
xlmRobertaForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMRobertaForTokenClassification
XLMRobertaForTokenClassification class for performing token classification on XLMRoberta models.
xlmRobertaForTokenClassification._call(model_inputs) ⇒ Promise<TokenClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.XLMRobertaForQuestionAnswering
XLMRobertaForQuestionAnswering class for performing question answering on XLMRoberta models.
xlmRobertaForQuestionAnswering._call(model_inputs) ⇒ Promise<QuestionAnsweringModelOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.WhisperModel
WhisperModel class for training Whisper models without a language model head.
models.WhisperForConditionalGeneration
WhisperForConditionalGeneration class for generating conditional outputs from Whisper models.
new WhisperForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the WhisperForConditionalGeneration class.
config
Object
Configuration object for the model.
session
Object
ONNX Session object for the model.
decoder_merged_session
Object
ONNX Session object for the decoder.
generation_config
Object
Configuration object for the generation process.
whisperForConditionalGeneration.generate(inputs, generation_config, logits_processor) ⇒ Promise<Object>
Generates outputs based on input and generation configuration.
inputs
Object
Input data for the model.
generation_config
WhisperGenerationConfig
Configuration object for the generation process.
logits_processor
Object
Optional logits processor object.
whisperForConditionalGeneration._extract_token_timestamps(generate_outputs, alignment_heads, [num_frames], [time_precision]) ⇒ Tensor
Calculates token-level timestamps using the encoder-decoder cross-attentions and dynamic time-warping (DTW) to map each output token to a position in the input audio.
generate_outputs
Object
Outputs generated by the model
generate_outputs.cross_attentions
Array.<Array<Array<Tensor>>>
The cross attentions output by the model
generate_outputs.decoder_attentions
Array.<Array<Array<Tensor>>>
The decoder attentions output by the model
generate_outputs.sequences
Array.<Array<number>>
The sequences output by the model
alignment_heads
Array.<Array<number>>
Alignment heads of the model
[num_frames]
number
Number of frames in the input audio.
[time_precision]
number
Precision of the timestamps in seconds. Defaults to 0.02.
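A sketch of how token-level timestamps are typically consumed through the automatic-speech-recognition pipeline, assuming a library version that supports return_timestamps: 'word' and a Whisper checkpoint whose config ships the alignment_heads needed for DTW (model ID and audio URL are illustrative):

```js
import { pipeline } from '@xenova/transformers';

// Word-level timestamps are derived from cross-attentions via dynamic time-warping.
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
const output = await transcriber('https://example.com/audio.wav', {
  return_timestamps: 'word',
});
// output.chunks: e.g. [{ text: 'Hello', timestamp: [0.0, 0.42] }, ...]
```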
models.VisionEncoderDecoderModel
Vision Encoder-Decoder model based on OpenAI’s GPT architecture for image captioning and other vision tasks
new VisionEncoderDecoderModel(config, session, decoder_merged_session, generation_config)
Creates a new instance of the VisionEncoderDecoderModel class.
config
Object
The configuration object specifying the hyperparameters and other model settings.
session
Object
The ONNX session containing the encoder model.
decoder_merged_session
any
The ONNX session containing the merged decoder model.
generation_config
Object
Configuration object for the generation process.
models.CLIPModel
CLIP Text and Vision Model with projection layers on top.
Example: Perform zero-shot image classification with a CLIPModel.
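A minimal sketch (model ID and image URL are illustrative):

```js
import { AutoTokenizer, AutoProcessor, CLIPModel, RawImage } from '@xenova/transformers';

// Load the tokenizer, processor, and model.
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const model = await CLIPModel.from_pretrained('Xenova/clip-vit-base-patch16');

// Candidate labels and the input image.
const texts = ['a photo of a car', 'a photo of a football match'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });
const image = await RawImage.read('https://example.com/image.jpg');
const image_inputs = await processor(image);

// Run the model: logits_per_image holds the image-text similarity scores.
const output = await model({ ...text_inputs, ...image_inputs });
```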
models.CLIPTextModelWithProjection
CLIP Text Model with a projection layer on top (a linear layer on top of the pooled output)
Example: Compute text embeddings with CLIPTextModelWithProjection.
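A minimal sketch (illustrative model ID):

```js
import { AutoTokenizer, CLIPTextModelWithProjection } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const text_model = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Compute projected text embeddings.
const texts = ['a photo of a car', 'a photo of a football match'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });
const { text_embeds } = await text_model(text_inputs);
```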
CLIPTextModelWithProjection.from_pretrained() : PreTrainedModel.from_pretrained
models.CLIPVisionModelWithProjection
CLIP Vision Model with a projection layer on top (a linear layer on top of the pooled output)
Example: Compute vision embeddings with CLIPVisionModelWithProjection.
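A minimal sketch (model ID and image URL are illustrative):

```js
import { AutoProcessor, CLIPVisionModelWithProjection, RawImage } from '@xenova/transformers';

const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Compute projected image embeddings.
const image = await RawImage.read('https://example.com/image.jpg');
const image_inputs = await processor(image);
const { image_embeds } = await vision_model(image_inputs);
```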
CLIPVisionModelWithProjection.from_pretrained() : PreTrainedModel.from_pretrained
models.GPT2PreTrainedModel
new GPT2PreTrainedModel(config, session, generation_config)
Creates a new instance of the GPT2PreTrainedModel class.
config
Object
The configuration of the model.
session
any
The ONNX session containing the model weights.
generation_config
GenerationConfig
The generation configuration.
models.GPT2LMHeadModel
GPT-2 language model head on top of the GPT-2 base model. This model is suitable for text generation tasks.
models.GPTNeoPreTrainedModel
new GPTNeoPreTrainedModel(config, session, generation_config)
Creates a new instance of the GPTNeoPreTrainedModel class.
config
Object
The configuration of the model.
session
any
The ONNX session containing the model weights.
generation_config
GenerationConfig
The generation configuration.
models.GPTNeoXPreTrainedModel
new GPTNeoXPreTrainedModel(config, session, generation_config)
Creates a new instance of the GPTNeoXPreTrainedModel class.
config
Object
The configuration of the model.
session
any
The ONNX session containing the model weights.
generation_config
GenerationConfig
The generation configuration.
models.GPTJPreTrainedModel
new GPTJPreTrainedModel(config, session, generation_config)
Creates a new instance of the GPTJPreTrainedModel class.
config
Object
The configuration of the model.
session
any
The ONNX session containing the model weights.
generation_config
GenerationConfig
The generation configuration.
models.GPTBigCodePreTrainedModel
new GPTBigCodePreTrainedModel(config, session, generation_config)
Creates a new instance of the GPTBigCodePreTrainedModel class.
config
Object
The configuration of the model.
session
any
The ONNX session containing the model weights.
generation_config
GenerationConfig
The generation configuration.
models.CodeGenPreTrainedModel
new CodeGenPreTrainedModel(config, session, generation_config)
Creates a new instance of the CodeGenPreTrainedModel class.
config
Object
The model configuration object.
session
Object
The ONNX session object.
generation_config
GenerationConfig
The generation configuration.
models.CodeGenModel
CodeGenModel is a class representing a code generation model without a language model head.
models.CodeGenForCausalLM
CodeGenForCausalLM is a class that represents a code generation model based on the GPT-2 architecture. It extends the CodeGenPreTrainedModel class.
models.LlamaPreTrainedModel
The bare LLaMA Model outputting raw hidden-states without any specific head on top.
new LlamaPreTrainedModel(config, session, generation_config)
Creates a new instance of the LlamaPreTrainedModel class.
config
Object
The model configuration object.
session
Object
The ONNX session object.
generation_config
GenerationConfig
The generation configuration.
models.LlamaModel
The bare LLaMA Model outputting raw hidden-states without any specific head on top.
models.BloomPreTrainedModel
The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
new BloomPreTrainedModel(config, session, generation_config)
Creates a new instance of the BloomPreTrainedModel class.
config
Object
The configuration of the model.
session
any
The ONNX session containing the model weights.
generation_config
GenerationConfig
The generation configuration.
models.BloomModel
The bare Bloom Model transformer outputting raw hidden-states without any specific head on top.
models.BloomForCausalLM
The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
models.MptPreTrainedModel
new MptPreTrainedModel(config, session, generation_config)
Creates a new instance of the MptPreTrainedModel class.
config
Object
The model configuration object.
session
Object
The ONNX session object.
generation_config
GenerationConfig
The generation configuration.
models.MptModel
The bare Mpt Model transformer outputting raw hidden-states without any specific head on top.
models.MptForCausalLM
The MPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
models.OPTPreTrainedModel
new OPTPreTrainedModel(config, session, generation_config)
Creates a new instance of the OPTPreTrainedModel class.
config
Object
The model configuration object.
session
Object
The ONNX session object.
generation_config
GenerationConfig
The generation configuration.
models.OPTModel
The bare OPT Model outputting raw hidden-states without any specific head on top.
models.OPTForCausalLM
The OPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
models.DetrObjectDetectionOutput
new DetrObjectDetectionOutput(output)
output
Object
The output of the model.
output.logits
Tensor
Classification logits (including no-object) for all queries.
output.pred_boxes
Tensor
Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding).
models.DetrSegmentationOutput
new DetrSegmentationOutput(output)
output
Object
The output of the model.
output.logits
Tensor
The output logits of the model.
output.pred_boxes
Tensor
Predicted boxes.
output.pred_masks
Tensor
Predicted masks.
models.ResNetPreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.
models.ResNetModel
The bare ResNet model outputting raw features without any specific head on top.
models.ResNetForImageClassification
ResNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.
resNetForImageClassification._call(model_inputs)
model_inputs
any
models.DonutSwinModel
The bare Donut Swin Model transformer outputting raw hidden-states without any specific head on top.
Example: Step-by-step Document Parsing.
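A sketch of the documented flow, assuming a Donut checkpoint fine-tuned for document parsing; the model ID, image URL, and task prompt are illustrative and depend on the checkpoint you use:

```js
import { AutoProcessor, AutoTokenizer, AutoModelForVision2Seq, RawImage } from '@xenova/transformers';

// Prepare image inputs.
const model_id = 'Xenova/donut-base-finetuned-cord-v2'; // illustrative checkpoint
const processor = await AutoProcessor.from_pretrained(model_id);
const image = await RawImage.read('https://example.com/receipt.png');
const image_inputs = await processor(image);

// Prepare decoder inputs from the checkpoint's task prompt.
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const decoder_input_ids = tokenizer('<s_cord-v2>', { add_special_tokens: false }).input_ids;

// Generate and decode.
const model = await AutoModelForVision2Seq.from_pretrained(model_id);
const output = await model.generate(image_inputs.pixel_values, {
  decoder_input_ids,
  max_length: model.config.decoder.max_position_embeddings,
});
const parsed = tokenizer.batch_decode(output)[0];
```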
Example: Step-by-step Document Visual Question Answering (DocVQA).
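The same flow with a DocVQA checkpoint, embedding the question in the task prompt (again, model ID, URL, and prompt format are illustrative assumptions):

```js
import { AutoProcessor, AutoTokenizer, AutoModelForVision2Seq, RawImage } from '@xenova/transformers';

const model_id = 'Xenova/donut-base-finetuned-docvqa'; // illustrative checkpoint
const processor = await AutoProcessor.from_pretrained(model_id);
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const model = await AutoModelForVision2Seq.from_pretrained(model_id);

// Embed the question in the prompt format the checkpoint expects.
const image = await RawImage.read('https://example.com/invoice.png');
const image_inputs = await processor(image);
const question = 'What is the invoice number?';
const prompt = `<s_docvqa><s_question>${question}</s_question><s_answer>`;
const decoder_input_ids = tokenizer(prompt, { add_special_tokens: false }).input_ids;

const output = await model.generate(image_inputs.pixel_values, {
  decoder_input_ids,
  max_length: model.config.decoder.max_position_embeddings,
});
const answer = tokenizer.batch_decode(output)[0];
```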
models.YolosObjectDetectionOutput
new YolosObjectDetectionOutput(output)
output
Object
The output of the model.
output.logits
Tensor
Classification logits (including no-object) for all queries.
output.pred_boxes
Tensor
Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding).
models.SamImageSegmentationOutput
Base class for Segment-Anything model’s output.
new SamImageSegmentationOutput(output)
output
Object
The output of the model.
output.iou_scores
Tensor
The model's predicted IoU (intersection-over-union) scores for the output masks.
output.pred_masks
Tensor
The predicted segmentation masks.
models.MarianMTModel
new MarianMTModel(config, session, decoder_merged_session, generation_config)
Creates a new instance of the MarianMTModel class.
config
Object
The model configuration object.
session
Object
The ONNX session object.
decoder_merged_session
any
The ONNX session for the decoder.
generation_config
any
The generation configuration.
models.M2M100ForConditionalGeneration
new M2M100ForConditionalGeneration(config, session, decoder_merged_session, generation_config)
Creates a new instance of the M2M100ForConditionalGeneration class.
config
Object
The model configuration object.
session
Object
The ONNX session object.
decoder_merged_session
any
The ONNX session for the decoder.
generation_config
any
The generation configuration.
models.Wav2Vec2Model
The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
Example: Load and run a Wav2Vec2Model for feature extraction.
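A minimal sketch (model ID and audio URL are illustrative):

```js
import { AutoProcessor, AutoModel, read_audio } from '@xenova/transformers';

// Read audio at the 16 kHz sampling rate the model expects.
const processor = await AutoProcessor.from_pretrained('Xenova/wav2vec2-base-960h');
const audio = await read_audio('https://example.com/speech.wav', 16000);
const inputs = await processor(audio);

// Run the bare model to obtain per-frame hidden states.
const model = await AutoModel.from_pretrained('Xenova/wav2vec2-base-960h');
const { last_hidden_state } = await model(inputs);
```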
models.WavLMPreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.
models.WavLMModel
The bare WavLM Model transformer outputting raw hidden-states without any specific head on top.
Example: Load and run a WavLMModel for feature extraction.
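A minimal sketch, analogous to the Wav2Vec2 example above (model ID and URL are illustrative):

```js
import { AutoProcessor, AutoModel, read_audio } from '@xenova/transformers';

const processor = await AutoProcessor.from_pretrained('Xenova/wavlm-base');
const audio = await read_audio('https://example.com/speech.wav', 16000);
const inputs = await processor(audio);

const model = await AutoModel.from_pretrained('Xenova/wavlm-base');
const { last_hidden_state } = await model(inputs);
```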
models.WavLMForCTC
WavLM Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
wavLMForCTC._call(model_inputs)
model_inputs
Object
model_inputs.input_values
Tensor
Float values of input raw speech waveform.
model_inputs.attention_mask
Tensor
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1].
models.WavLMForSequenceClassification
WavLM Model with a sequence classification head on top (a linear layer over the pooled output).
wavLMForSequenceClassification._call(model_inputs) ⇒ Promise<SequenceClassifierOutput>
Calls the model on new inputs.
model_inputs
Object
The inputs to the model.
models.SpeechT5PreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.
models.SpeechT5Model
The bare SpeechT5 Encoder-Decoder Model outputting raw hidden-states without any specific pre- or post-nets.
models.SpeechT5ForSpeechToText
SpeechT5 Model with a speech encoder and a text decoder.
models.SpeechT5ForTextToSpeech
SpeechT5 Model with a text encoder and a speech decoder.
new SpeechT5ForTextToSpeech(config, session, decoder_merged_session, generation_config)
Creates a new instance of the SpeechT5ForTextToSpeech class.
config
Object
The model configuration.
session
any
The ONNX session for the model.
decoder_merged_session
any
The ONNX session for the decoder.
generation_config
GenerationConfig
The generation configuration.
speechT5ForTextToSpeech.generate_speech(input_values, speaker_embeddings, options) ⇒ Promise<SpeechOutput>
Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a speech waveform using a vocoder.
input_values
Tensor
Indices of input sequence tokens in the vocabulary.
speaker_embeddings
Tensor
Tensor containing the speaker embeddings.
options
Object
Optional parameters for generating speech.
[options.threshold]
number
The generated sequence ends when the predicted stop token probability exceeds this value. Defaults to 0.5.
[options.minlenratio]
number
Used to calculate the minimum required length for the output sequence. Defaults to 0.0.
[options.maxlenratio]
number
Used to calculate the maximum allowed length for the output sequence. Defaults to 20.0.
[options.vocoder]
Object
The vocoder that converts the mel spectrogram into a speech waveform. If null, the output is the mel spectrogram.
[options.output_cross_attentions]
boolean
Whether or not to return the attention tensors of the decoder's cross-attention layers. Defaults to false.
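A sketch of generate_speech with the SpeechT5HifiGan vocoder described below; the model IDs, the speaker-embeddings URL, and its (1, 512) shape are illustrative assumptions:

```js
import { AutoTokenizer, SpeechT5ForTextToSpeech, SpeechT5HifiGan, Tensor } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/speecht5_tts');
const model = await SpeechT5ForTextToSpeech.from_pretrained('Xenova/speecht5_tts');
const vocoder = await SpeechT5HifiGan.from_pretrained('Xenova/speecht5_hifigan');

// Load speaker embeddings as a float tensor (illustrative URL).
const data = new Float32Array(
  await (await fetch('https://example.com/speaker_embeddings.bin')).arrayBuffer()
);
const speaker_embeddings = new Tensor('float32', data, [1, data.length]);

// Convert text to a waveform via the vocoder; omit the vocoder to get the spectrogram.
const { input_ids } = await tokenizer('Hello, my dog is cute');
const { waveform } = await model.generate_speech(input_ids, speaker_embeddings, { vocoder });
```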
models.SpeechT5HifiGan
HiFi-GAN vocoder.
models.PretrainedMixin
Base class of all AutoModels. Contains the from_pretrained function which is used to instantiate pretrained models.
pretrainedMixin.MODEL_CLASS_MAPPINGS : *
Mapping from model type to model class.
pretrainedMixin.BASE_IF_FAIL
Whether to attempt to instantiate the base class (PretrainedModel) if the model type is not found in the mapping.
PretrainedMixin.from_pretrained() : PreTrainedModel.from_pretrained
models.AutoModel
Helper class which is used to instantiate pretrained models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForSequenceClassification
Helper class which is used to instantiate pretrained sequence classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForTokenClassification
Helper class which is used to instantiate pretrained token classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForSeq2SeqLM
Helper class which is used to instantiate pretrained sequence-to-sequence models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForSpeechSeq2Seq
Helper class which is used to instantiate pretrained sequence-to-sequence speech-to-text models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForTextToSpectrogram
Helper class which is used to instantiate pretrained sequence-to-sequence text-to-spectrogram models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForCausalLM
Helper class which is used to instantiate pretrained causal language models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForMaskedLM
Helper class which is used to instantiate pretrained masked language models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForQuestionAnswering
Helper class which is used to instantiate pretrained question answering models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForVision2Seq
Helper class which is used to instantiate pretrained vision-to-sequence models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForImageClassification
Helper class which is used to instantiate pretrained image classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForImageSegmentation
Helper class which is used to instantiate pretrained image segmentation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForObjectDetection
Helper class which is used to instantiate pretrained object detection models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.AutoModelForMaskGeneration
Helper class which is used to instantiate pretrained mask generation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.
models.Seq2SeqLMOutput
new Seq2SeqLMOutput(output)
output
Object
The output of the model.
output.logits
Tensor
The output logits of the model.
output.past_key_values
Tensor
A tensor of key/value pairs that represent the previous state of the model.
output.encoder_outputs
Tensor
The output of the encoder in a sequence-to-sequence model.
[output.decoder_attentions]
Tensor
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
[output.cross_attentions]
Tensor
Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
models.SequenceClassifierOutput
Base class for outputs of sentence classification models.
new SequenceClassifierOutput(output)
output
Object
The output of the model.
output.logits
Tensor
classification (or regression if config.num_labels==1) scores (before SoftMax).
models.TokenClassifierOutput
Base class for outputs of token classification models.
new TokenClassifierOutput(output)
output
Object
The output of the model.
output.logits
Tensor
Classification scores (before SoftMax).
models.MaskedLMOutput
Base class for masked language models outputs.
new MaskedLMOutput(output)
output
Object
The output of the model.
output.logits
Tensor
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
models.QuestionAnsweringModelOutput
Base class for outputs of question answering models.
new QuestionAnsweringModelOutput(output)
output
Object
The output of the model.
output.start_logits
Tensor
Span-start scores (before SoftMax).
output.end_logits
Tensor
Span-end scores (before SoftMax).
models.CausalLMOutput
Base class for causal language model (or autoregressive) outputs.
new CausalLMOutput(output)
output
Object
The output of the model.
output.logits
Tensor
Prediction scores of the language modeling head (scores for each vocabulary token before softmax).
models.CausalLMOutputWithPast
Base class for causal language model (or autoregressive) outputs.
new CausalLMOutputWithPast(output)
output
Object
The output of the model.
output.logits
Tensor
Prediction scores of the language modeling head (scores for each vocabulary token before softmax).
output.past_key_values
Tensor
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
models~TypedArray : *
models~DecoderOutput ⇒ Promise<(Array<Array<number>>|EncoderDecoderOutput|DecoderOutput)>
Generates text based on the given inputs and generation configuration using the model.
Throws: Error if the inputs array is empty.
inputs
Tensor | Array | TypedArray
An array of input token IDs.
generation_config
Object | GenerationConfig | null
The generation configuration to use. If null, the default configuration will be used.
logits_processor
Object | null
An optional logits processor to use. If null, a new LogitsProcessorList instance will be created.
options
Object
Additional options.
[options.inputs_attention_mask]
Object
An optional attention mask for the inputs.
models~WhisperGenerationConfig : Object
[return_timestamps]
boolean
Whether to return the timestamps with the text. This enables the WhisperTimestampsLogitsProcessor.
[return_token_timestamps]
boolean
Whether to return token-level timestamps with the text. This can be used with or without the return_timestamps option. To get word-level timestamps, use the tokenizer to group the tokens into words.
[num_frames]
number
The number of audio frames available in this chunk. This is only used when generating word-level timestamps.
models~SpeechOutput : Object
[spectrogram]
Tensor
The predicted log-mel spectrogram of shape (output_sequence_length, config.num_mel_bins). Returned when no vocoder is provided.
[waveform]
Tensor
The predicted waveform of shape (num_frames,). Returned when a vocoder is provided.
[cross_attentions]
Tensor
The outputs of the decoder's cross-attention layers of shape (config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length). Returned when output_cross_attentions is true.