Wav2Vec2
The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed and Michael Auli.
The abstract from the paper is the following:
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
Tips:
Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer (see the inference sketch below).
This model was contributed by patrickvonplaten.
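A minimal end-to-end CTC inference sketch follows. It assumes the transformers-style API documented below; the facebook/wav2vec2-base-960h checkpoint and the raw_audio array are stand-ins you would replace with your own checkpoint and audio.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# assumed checkpoint; any CTC fine-tuned Wav2Vec2 checkpoint works the same way
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# raw_audio: a 1-D float array of the waveform, sampled at 16 kHz (placeholder)
inputs = processor(raw_audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: argmax over the vocabulary, repeats and blanks are collapsed by the tokenizer
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```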
A list of official BOINC AI and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Audio Classification
A community notebook on audio classification with Wav2Vec2. 🌎
Audio classification with Wav2Vec2 is also supported by an official example script and notebook.
Automatic Speech Recognition
🚀 Deploy
( vocab_size = 32, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, feat_quantizer_dropout = 0.0, final_dropout = 0.1, layerdrop = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (512, 512, 512, 512, 512, 512, 512), conv_stride = (5, 2, 2, 2, 2, 2, 2), conv_kernel = (10, 3, 3, 3, 3, 2, 2), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, do_stable_layer_norm = False, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, num_codevectors_per_group = 320, num_codevector_groups = 2, contrastive_logits_temperature = 0.1, num_negatives = 100, codevector_dim = 256, proj_codevector_dim = 256, diversity_loss_weight = 0.1, ctc_loss_reduction = 'sum', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, tdnn_dim = (512, 512, 512, 512, 1500), tdnn_kernel = (5, 3, 3, 1, 1), tdnn_dilation = (1, 2, 3, 1, 1), xvector_output_dim = 512, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, add_adapter = False, adapter_kernel_size = 3, adapter_stride = 2, num_adapter_layers = 3, output_hidden_size = None, adapter_attn_dim = None, **kwargs )
Parameters
hidden_size (int
, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int
, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int
, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int
, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str
or function
, optional, defaults to "gelu"
) — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu"
, "relu"
, "selu"
and "gelu_new"
are supported.
hidden_dropout (float
, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (float
, optional, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.
attention_dropout (float
, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
initializer_range (float
, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float
, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
feat_extract_norm (str
, optional, defaults to "group"
) — The norm to be applied to 1D convolutional layers in feature encoder. One of "group"
for group normalization of only the first 1D convolutional layer or "layer"
for layer normalization of all 1D convolutional layers.
feat_proj_dropout (float
, optional, defaults to 0.0) — The dropout probability for output of the feature encoder.
feat_extract_activation (str
, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
feat_quantizer_dropout (float
, optional, defaults to 0.0) — The dropout probability for quantized feature encoder states.
conv_dim (Tuple[int]
or List[int]
, optional, defaults to (512, 512, 512, 512, 512, 512, 512)
) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int]
or List[int]
, optional, defaults to (5, 2, 2, 2, 2, 2, 2)
) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int]
or List[int]
, optional, defaults to (10, 3, 3, 3, 3, 2, 2)
) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.
conv_bias (bool
, optional, defaults to False
) — Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int
, optional, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.
num_conv_pos_embedding_groups (int
, optional, defaults to 16) — Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool
, optional, defaults to False
) — Whether to apply stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True
corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False
corresponds to applying layer norm after the attention layer.
mask_time_prob (float
, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int
, optional, defaults to 10) — Length of vector span along the time axis.
mask_time_min_masks (int
, optional, defaults to 2) — The minimum number of masks of length mask_time_length generated along the time axis, each time step, irrespective of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks.
mask_feature_prob (float
, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int
, optional, defaults to 10) — Length of vector span along the feature axis.
mask_feature_min_masks (int
, optional, defaults to 0) — The minimum number of masks of length mask_feature_length generated along the feature axis, each time step, irrespective of mask_feature_prob. Only relevant if mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
num_codevectors_per_group (int
, optional, defaults to 320) — Number of entries in each quantization codebook (group).
num_codevector_groups (int
, optional, defaults to 2) — Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (float
, optional, defaults to 0.1) — The temperature kappa in the contrastive loss.
feat_quantizer_dropout (float
, optional, defaults to 0.0) — The dropout probability for the output of the feature encoder that's used by the quantizer.
num_negatives (int
, optional, defaults to 100) — Number of negative samples for the contrastive loss.
codevector_dim (int
, optional, defaults to 256) — Dimensionality of the quantized feature vectors.
proj_codevector_dim (int
, optional, defaults to 256) — Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (float
, optional, defaults to 0.1) — The weight of the codebook diversity loss component.
classifier_proj_size (int
, optional, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (Tuple[int]
or List[int]
, optional, defaults to (512, 512, 512, 512, 1500)
) — A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.
tdnn_kernel (Tuple[int]
or List[int]
, optional, defaults to (5, 3, 3, 1, 1)
) — A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.
tdnn_dilation (Tuple[int]
or List[int]
, optional, defaults to (1, 2, 3, 1, 1)
) — A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.
xvector_output_dim (int
, optional, defaults to 512) — Dimensionality of the XVector embedding vectors.
add_adapter (bool
, optional, defaults to False
) — Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for warm-starting Wav2Vec2 for SpeechEncoderDecoder models.
adapter_kernel_size (int
, optional, defaults to 3) — Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True
.
adapter_stride (int
, optional, defaults to 2) — Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True
.
num_adapter_layers (int
, optional, defaults to 3) — Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True
.
output_hidden_size (int
, optional) — Dimensionality of the encoder output layer. If not defined, this defaults to hidden_size. Only relevant if add_adapter is True
.
Example:
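A minimal sketch of creating a configuration and initializing a model from it (names as in the transformers-style API documented above):

```python
from transformers import Wav2Vec2Config, Wav2Vec2Model

# initializing a configuration with the defaults documented above
configuration = Wav2Vec2Config()

# initializing a (randomly weighted) model from that configuration
model = Wav2Vec2Model(configuration)

# the configuration can be recovered from the model
configuration = model.config
```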
( vocab_file, bos_token = '<s>', eos_token = '</s>', unk_token = '<unk>', pad_token = '<pad>', word_delimiter_token = '|', replace_word_delimiter_char = ' ', do_lower_case = False, target_lang = None, **kwargs )
Parameters
vocab_file (str
) — File containing the vocabulary.
bos_token (str
, optional, defaults to "<s>"
) — The beginning of sentence token.
eos_token (str
, optional, defaults to "</s>"
) — The end of sentence token.
unk_token (str
, optional, defaults to "<unk>"
) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str
, optional, defaults to "<pad>"
) — The token used for padding, for example when batching sequences of different lengths.
word_delimiter_token (str
, optional, defaults to "|"
) — The token used for defining the end of a word.
do_lower_case (bool
, optional, defaults to False
) — Whether or not to accept lowercase input and lowercase the output when decoding.
Constructs a Wav2Vec2CTC tokenizer.
__call__
Parameters
text (str
, List[str]
, List[List[str]]
, optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True
(to lift the ambiguity with a batch of sequences).
text_pair (str
, List[str]
, List[List[str]]
, optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True
(to lift the ambiguity with a batch of sequences).
text_target (str
, List[str]
, List[List[str]]
, optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True
(to lift the ambiguity with a batch of sequences).
text_pair_target (str
, List[str]
, List[List[str]]
, optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True
(to lift the ambiguity with a batch of sequences).
add_special_tokens (bool
, optional, defaults to True
) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens
function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos
or eos
tokens automatically.
True
or 'longest'
: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length'
: Pad to a maximum length specified with the argument max_length
or to the maximum acceptable input length for the model if that argument is not provided.
False
or 'do_not_pad'
(default): No padding (i.e., can output a batch with sequences of different lengths).
True
or 'longest_first'
: Truncate to a maximum length specified with the argument max_length
or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first'
: Truncate to a maximum length specified with the argument max_length
or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second'
: Truncate to a maximum length specified with the argument max_length
or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False
or 'do_not_truncate'
(default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int
, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None
, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int
, optional, defaults to 0) — If set to a number along with max_length
, the overflowing tokens returned when return_overflowing_tokens=True
will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool
, optional, defaults to False
) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True
, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int
, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding
to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5
(Volta).
'tf'
: Return TensorFlow tf.constant
objects.
'pt'
: Return PyTorch torch.Tensor
objects.
'np'
: Return Numpy np.ndarray
objects.
return_token_type_ids (bool
, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs
attribute.
return_attention_mask (bool
, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs
attribute.
return_overflowing_tokens (bool
, optional, defaults to False
) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first
or True
, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool
, optional, defaults to False
) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool
, optional, defaults to False
) — Whether or not to return (char_start, char_end)
for each token.
return_length (bool
, optional, defaults to False
) — Whether or not to return the lengths of the encoded inputs.
verbose (bool
, optional, defaults to True
) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize()
method
Returns
input_ids — List of token ids to be fed to a model.
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True
or if “token_type_ids” is in self.model_input_names
).
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True
or if “attention_mask” is in self.model_input_names
).
overflowing_tokens — List of overflowing tokens sequences (when a max_length
is specified and return_overflowing_tokens=True
).
num_truncated_tokens — Number of tokens truncated (when a max_length
is specified and return_overflowing_tokens=True
).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True
and return_special_tokens_mask=True
).
length — The length of the inputs (when return_length=True
)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
save_vocabulary
( save_directory: str, filename_prefix: typing.Optional[str] = None )
decode
( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, output_char_offsets: bool = False, output_word_offsets: bool = False, **kwargs ) → str
or Wav2Vec2CTCTokenizerOutput
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]
) — List of tokenized input ids. Can be obtained using the __call__
method.
skip_special_tokens (bool
, optional, defaults to False
) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool
, optional) — Whether or not to clean up the tokenization spaces.
output_char_offsets (bool
, optional, defaults to False
) — Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.
Please take a look at the example below to better understand how to make use of output_char_offsets
.
output_word_offsets (bool
, optional, defaults to False
) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.
Please take a look at the example below to better understand how to make use of output_word_offsets
.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
Returns
str
or Wav2Vec2CTCTokenizerOutput
The list of decoded sentences. Will be a Wav2Vec2CTCTokenizerOutput
when output_char_offsets == True
or output_word_offsets == True
.
Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))
.
Example:
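A sketch of decoding with word offsets to derive word-level time stamps. It assumes a CTC checkpoint (facebook/wav2vec2-base-960h here) and an input_values tensor you have already preprocessed; the tokenizer is the processor's CTC tokenizer.

```python
import torch
from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")  # assumed checkpoint
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# input_values: preprocessed 16 kHz audio (placeholder), shape (1, num_samples)
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)

# request word offsets along with the transcription
outputs = processor.tokenizer.decode(predicted_ids[0], output_word_offsets=True)

# convert frame offsets to seconds using the model's downsampling ratio
time_offset = model.config.inputs_to_logits_ratio / processor.feature_extractor.sampling_rate
word_time_stamps = [
    {
        "word": w["word"],
        "start_time": round(w["start_offset"] * time_offset, 2),
        "end_time": round(w["end_offset"] * time_offset, 2),
    }
    for w in outputs.word_offsets
]
```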
batch_decode
( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, output_char_offsets: bool = False, output_word_offsets: bool = False, **kwargs ) → List[str]
or Wav2Vec2CTCTokenizerOutput
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]
) — List of tokenized input ids. Can be obtained using the __call__
method.
skip_special_tokens (bool
, optional, defaults to False
) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool
, optional) — Whether or not to clean up the tokenization spaces.
output_char_offsets (bool
, optional, defaults to False
) — Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.
output_word_offsets (bool
, optional, defaults to False
) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
Returns
List[str]
or Wav2Vec2CTCTokenizerOutput
The list of decoded sentences. Will be a Wav2Vec2CTCTokenizerOutput
when output_char_offsets == True
or output_word_offsets == True
.
Convert a list of lists of token ids into a list of strings by calling decode.
set_target_lang
( target_lang: str )
Set the target language of a nested multilingual dictionary.
( feature_size = 1, sampling_rate = 16000, padding_value = 0.0, return_attention_mask = False, do_normalize = True, **kwargs )
Parameters
feature_size (int
, defaults to 1) — The feature dimension of the extracted features.
sampling_rate (int
, defaults to 16000) — The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (float
, defaults to 0.0) — The value that is used to fill the padding values.
Constructs a Wav2Vec2 feature extractor.
__call__
( raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]], padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, max_length: typing.Optional[int] = None, truncation: bool = False, pad_to_multiple_of: typing.Optional[int] = None, return_attention_mask: typing.Optional[bool] = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, sampling_rate: typing.Optional[int] = None, **kwargs )
Parameters
raw_speech (np.ndarray
, List[float]
, List[np.ndarray]
, List[List[float]]
) — The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep.
True
or 'longest'
: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length'
: Pad to a maximum length specified with the argument max_length
or to the maximum acceptable input length for the model if that argument is not provided.
False
or 'do_not_pad'
(default): No padding (i.e., can output a batch with sequences of different lengths).
max_length (int
, optional) — Maximum length of the returned list and optionally padding length (see above).
truncation (bool
) — Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int
, optional) — If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5
(Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool
, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor’s default.
'tf'
: Return TensorFlow tf.constant
objects.
'pt'
: Return PyTorch torch.Tensor
objects.
'np'
: Return Numpy np.ndarray
objects.
sampling_rate (int
, optional) — The sampling rate at which the raw_speech
input was sampled. It is strongly recommended to pass sampling_rate
at the forward call to prevent silent errors.
padding_value (float
, defaults to 0.0) — The value that is used to fill the padding values / vectors.
Main method to featurize and prepare for the model one or several sequence(s).
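A minimal sketch of featurizing a batch of waveforms, assuming the Wav2Vec2FeatureExtractor class documented above; the random arrays stand in for real mono audio.

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)

# two mono waveforms of different lengths (random values stand in for real audio)
speech = [np.random.randn(16000).astype(np.float32), np.random.randn(8000).astype(np.float32)]

# pad to the longest sequence in the batch and return PyTorch tensors
inputs = feature_extractor(speech, sampling_rate=16000, padding=True, return_tensors="pt")
print(inputs["input_values"].shape)  # (2, 16000)
```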
( feature_extractor, tokenizer )
Parameters
Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single processor.
__call__
( *args, **kwargs )
pad
( *args, **kwargs )
from_pretrained
( pretrained_model_name_or_path, **kwargs )
save_pretrained
( save_directory, push_to_hub: bool = False, **kwargs )
Parameters
save_directory (str
or os.PathLike
) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
push_to_hub (bool
, optional, defaults to False
) — Whether or not to push your model to the BOINC AI model hub after saving it. You can specify the repository you want to push to with repo_id
(will default to the name of save_directory
in your namespace).
batch_decode
( *args, **kwargs )
decode
( *args, **kwargs )
( feature_extractor: FeatureExtractionMixin, tokenizer: PreTrainedTokenizerBase, decoder: BeamSearchDecoderCTC )
Parameters
decoder (pyctcdecode.BeamSearchDecoderCTC
) — An instance of pyctcdecode.BeamSearchDecoderCTC
. The decoder is a required input.
Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor, a Wav2Vec2 CTC tokenizer and a decoder with language model support into a single processor for language model boosted speech recognition decoding.
__call__
( *args, **kwargs )
pad
( *args, **kwargs )
from_pretrained
( pretrained_model_name_or_path, **kwargs )
Parameters
pretrained_model_name_or_path (str
or os.PathLike
) — This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on boincai.com. Valid model ids can be located at the root-level, like bert-base-uncased
, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased
.
Please refer to the docstrings of the methods above for more information.
save_pretrained
( save_directory )
batch_decode
( logits: ndarray, pool: typing.Optional[multiprocessing.pool.Pool] = None, num_processes: typing.Optional[int] = None, beam_width: typing.Optional[int] = None, beam_prune_logp: typing.Optional[float] = None, token_min_logp: typing.Optional[float] = None, hotwords: typing.Optional[typing.Iterable[str]] = None, hotword_weight: typing.Optional[float] = None, alpha: typing.Optional[float] = None, beta: typing.Optional[float] = None, unk_score_offset: typing.Optional[float] = None, lm_score_boundary: typing.Optional[bool] = None, output_word_offsets: bool = False, n_best: int = 1 )
Parameters
logits (np.ndarray
) — The logits output vector of the model representing the log probabilities for each token.
pool (multiprocessing.Pool
, optional) — An optional user-managed pool. If not set, one will be automatically created and closed. The pool should be instantiated after Wav2Vec2ProcessorWithLM
. Otherwise, the LM won’t be available to the pool’s sub-processes.
Currently, only pools created with a ‘fork’ context can be used. If a ‘spawn’ pool is passed, it will be ignored and sequential decoding will be used instead.
num_processes (int
, optional) — If pool
is not set, number of processes on which the function should be parallelized over. Defaults to the number of available CPUs.
beam_width (int
, optional) — Maximum number of beams at each step in decoding. Defaults to pyctcdecode’s DEFAULT_BEAM_WIDTH.
beam_prune_logp (int
, optional) — Beams that are much worse than the best beam will be pruned. Defaults to pyctcdecode’s DEFAULT_PRUNE_LOGP.
token_min_logp (int
, optional) — Tokens with log-probs below this value are skipped unless they are the argmax of the frame. Defaults to pyctcdecode’s DEFAULT_MIN_TOKEN_LOGP.
hotwords (List[str]
, optional) — List of words with extra importance, which can be OOV for the LM.
hotword_weight (int
, optional) — Weight factor for hotword importance. Defaults to pyctcdecode’s DEFAULT_HOTWORD_WEIGHT.
alpha (float
, optional) — Weight for language model during shallow fusion
beta (float
, optional) — Weight for length score adjustment during scoring.
unk_score_offset (float
, optional) — Amount of log score offset for unknown tokens
lm_score_boundary (bool
, optional) — Whether to have kenlm respect boundaries when scoring
output_word_offsets (bool
, optional, defaults to False
) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.
n_best (int
, optional, defaults to 1
) — Number of best hypotheses to return. If n_best
is greater than 1, the returned text
will be a list of lists of strings, logit_score
will be a list of lists of floats, and lm_score
will be a list of lists of floats, where the length of the outer list will correspond to the batch size and the length of the inner list will correspond to the number of returned hypotheses. The value should be >= 1.
Batch decode output logits to audio transcription with language model support.
If you are decoding multiple batches, consider creating a Pool
and passing it to batch_decode
. Otherwise, batch_decode
will be very slow since it will create a fresh Pool
for each call. See usage example below.
decode
( logits: ndarray, beam_width: typing.Optional[int] = None, beam_prune_logp: typing.Optional[float] = None, token_min_logp: typing.Optional[float] = None, hotwords: typing.Optional[typing.Iterable[str]] = None, hotword_weight: typing.Optional[float] = None, alpha: typing.Optional[float] = None, beta: typing.Optional[float] = None, unk_score_offset: typing.Optional[float] = None, lm_score_boundary: typing.Optional[bool] = None, output_word_offsets: bool = False, n_best: int = 1 )
Parameters
logits (np.ndarray
) — The logits output vector of the model representing the log probabilities for each token.
beam_width (int
, optional) — Maximum number of beams at each step in decoding. Defaults to pyctcdecode’s DEFAULT_BEAM_WIDTH.
beam_prune_logp (int
, optional) — A threshold to prune beams with log-probs less than best_beam_logp + beam_prune_logp. The value should be <= 0. Defaults to pyctcdecode’s DEFAULT_PRUNE_LOGP.
token_min_logp (int
, optional) — Tokens with log-probs below token_min_logp are skipped unless they have the maximum log-prob for an utterance. Defaults to pyctcdecode’s DEFAULT_MIN_TOKEN_LOGP.
hotwords (List[str]
, optional) — List of words with extra importance which can be missing from the LM’s vocabulary, e.g. [“boincai”]
hotword_weight (int
, optional) — Weight multiplier that boosts hotword scores. Defaults to pyctcdecode’s DEFAULT_HOTWORD_WEIGHT.
alpha (float
, optional) — Weight for language model during shallow fusion
beta (float
, optional) — Weight for length score adjustment during scoring.
unk_score_offset (float
, optional) — Amount of log score offset for unknown tokens
lm_score_boundary (bool
, optional) — Whether to have kenlm respect boundaries when scoring
output_word_offsets (bool
, optional, defaults to False
) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.
n_best (int
, optional, defaults to 1
) — Number of best hypotheses to return. If n_best
is greater than 1, the returned text
will be a list of strings, logit_score
will be a list of floats, and lm_score
will be a list of floats, where the length of these lists will correspond to the number of returned hypotheses. The value should be >= 1.
Please take a look at the example below to better understand how to make use of output_word_offsets
.
Decode output logits to audio transcription with language model support.
Example:
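A sketch of language-model-boosted batch decoding with a user-managed pool, assuming a checkpoint that ships a pyctcdecode language model (patrickvonplaten/wav2vec2-base-100h-with-lm is used as a stand-in) and an already preprocessed input_values batch.

```python
import torch
from multiprocessing import get_context
from transformers import AutoProcessor, AutoModelForCTC

# assumed checkpoint that bundles a pyctcdecode LM alongside the tokenizer
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

# input_values: preprocessed 16 kHz audio batch (placeholder)
with torch.no_grad():
    logits = model(input_values).logits

# instantiate the pool *after* the processor so the LM is visible to the sub-processes;
# only pools created with a "fork" context are used for parallel decoding
with get_context("fork").Pool(processes=2) as pool:
    decoded = processor.batch_decode(logits.numpy(), pool=pool)

transcriptions = decoded.text
```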
( text: typing.Union[typing.List[typing.List[str]], typing.List[str], str], logit_score: typing.Union[typing.List[typing.List[float]], typing.List[float], float] = None, lm_score: typing.Union[typing.List[typing.List[float]], typing.List[float], float] = None, word_offsets: typing.Union[typing.List[typing.List[typing.List[typing.Dict[str, typing.Union[int, str]]]]], typing.List[typing.List[typing.Dict[str, typing.Union[int, str]]]], typing.List[typing.Dict[str, typing.Union[int, str]]]] = None )
Parameters
text (list of str
or str
) — Decoded logits in text form. Usually the speech transcription.
logit_score (list of float
or float
) — Total logit score of the beams associated with produced text.
lm_score (list of float
) — Fused lm_score of the beams associated with produced text.
word_offsets (list of List[Dict[str, Union[int, str]]]
or List[Dict[str, Union[int, str]]]
) — Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets can be used to compute time stamps for each word.
Output type of Wav2Vec2DecoderWithLM
, with transcription.
( last_hidden_state: FloatTensor = None, extract_features: FloatTensor = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
last_hidden_state (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor
of shape (batch_size, sequence_length, conv_dim[-1])
) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Base class for models that have been trained with the Wav2Vec2 loss objective.
( loss: typing.Optional[torch.FloatTensor] = None, projected_states: FloatTensor = None, projected_quantized_states: FloatTensor = None, codevector_perplexity: FloatTensor = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, contrastive_loss: typing.Optional[torch.FloatTensor] = None, diversity_loss: typing.Optional[torch.FloatTensor] = None )
Parameters
projected_states (torch.FloatTensor
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.
projected_quantized_states (torch.FloatTensor
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
( last_hidden_state: Array = None, extract_features: Array = None, hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None, attentions: typing.Optional[typing.Tuple[jax.Array]] = None )
Parameters
last_hidden_state (jnp.ndarray
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (jnp.ndarray
of shape (batch_size, sequence_length, last_conv_dim)
) — Sequence of extracted feature vectors of the last convolutional layer of the model with last_conv_dim
being the dimension of the last convolutional layer.
hidden_states (tuple(jnp.ndarray)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of jnp.ndarray
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of FlaxWav2Vec2BaseModelOutput
, with potential hidden states and attentions.
replace
( **updates )
Returns a new object replacing the specified fields with new values.
( projected_states: Array = None, projected_quantized_states: Array = None, codevector_perplexity: Array = None, hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None, attentions: typing.Optional[typing.Tuple[jax.Array]] = None )
Parameters
projected_states (jnp.ndarray
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.
projected_quantized_states (jnp.ndarray
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.
hidden_states (tuple(jnp.ndarray)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of jnp.ndarray
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of FlaxWav2Vec2ForPreTrainingOutput
, with potential hidden states and attentions.
replace
( **updates )
Returns a new object replacing the specified fields with new values.
( config: Wav2Vec2Config )
Parameters
forward
Parameters
attention_mask (torch.LongTensor
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
Returns
last_hidden_state (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor
of shape (batch_size, sequence_length, conv_dim[-1])
) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
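A minimal sketch of the base model's forward pass using a randomly initialized model (a pretrained checkpoint would instead be loaded with from_pretrained; the random tensor stands in for a real waveform):

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2Model

# randomly initialized model built from the default configuration
model = Wav2Vec2Model(Wav2Vec2Config())
model.eval()

# batch of one second of 16 kHz audio (random values stand in for a real waveform)
input_values = torch.randn(1, 16000)

with torch.no_grad():
    outputs = model(input_values)

print(outputs.last_hidden_state.shape)  # (1, num_frames, hidden_size)
print(outputs.extract_features.shape)   # (1, num_frames, conv_dim[-1])
```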
( config, target_lang: typing.Optional[str] = None )
Parameters
forward
Parameters
attention_mask (torch.LongTensor
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
labels (torch.LongTensor
of shape (batch_size, target_length)
, optional) — Labels for connectionist temporal classification. Note that target_length
has to be smaller or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]
. All labels set to -100
are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1]
.
Returns
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor
of shape (batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
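A sketch of CTC transcription and loss computation, assuming the facebook/wav2vec2-base-960h checkpoint and an already preprocessed input_values tensor; the reference transcription is a placeholder.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")  # assumed checkpoint
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# input_values: preprocessed 16 kHz audio (placeholder), shape (1, num_samples)
with torch.no_grad():
    logits = model(input_values).logits

# greedy decoding to text
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

# CTC loss against a reference transcription (labels encoded with the CTC tokenizer)
labels = processor.tokenizer("A REFERENCE TRANSCRIPTION", return_tensors="pt").input_ids
loss = model(input_values, labels=labels).loss
```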
load_adapter
( target_lang: str, force_load = True, **kwargs )
Parameters
target_lang (str
) — Has to be a language id of an existing adapter weight. Adapter weights are stored in the format adapter.<target_lang>.safetensors or adapter.<target_lang>.bin
force_load (bool
, defaults to True
) — Whether the weights shall be loaded even if target_lang
matches self.target_lang
.
cache_dir (Union[str, os.PathLike]
, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
force_download (bool
, optional, defaults to False
) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool
, optional, defaults to False
) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str]
, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request.
local_files_only(bool
, optional, defaults to False
) — Whether or not to only look at local files (i.e., do not try to download the model).
token (str
or bool
, optional) — The token to use as HTTP bearer authorization for remote files. If True
, or not specified, will use the token generated when running boincai-cli login
(stored in ~/.boincai
).
revision (str
, optional, defaults to "main"
) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on boincai.com, so revision
can be any identifier allowed by git.
To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".
mirror (str
, optional) — Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
Load a language adapter model from a pre-trained adapter model.
Examples:
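A sketch of switching language adapters, assuming a multilingual checkpoint that ships per-language adapter weights (facebook/mms-1b-all is used as a stand-in):

```python
from transformers import Wav2Vec2ForCTC, AutoProcessor

# assumed multilingual checkpoint with per-language adapter weights
ckpt = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(ckpt, target_lang="eng")
model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng", ignore_mismatched_sizes=True)

# switch both the tokenizer vocabulary and the adapter weights to another language
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
```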
( config )
Parameters
Wav2Vec2 Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.
forward
Parameters
attention_mask (torch.LongTensor
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
labels (torch.LongTensor
of shape (batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels == 1
a regression loss is computed (Mean-Square loss), If config.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor
of shape (batch_size, config.num_labels)
) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
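A minimal sketch of sequence classification with a randomly initialized model; the 8-way label setup and the random input are assumptions standing in for a real checkpoint and real audio.

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForSequenceClassification

# hypothetical 8-way audio classification head on a randomly initialized backbone
config = Wav2Vec2Config(num_labels=8)
model = Wav2Vec2ForSequenceClassification(config)
model.eval()

input_values = torch.randn(2, 16000)  # batch of two one-second 16 kHz clips (random)
labels = torch.tensor([0, 3])         # placeholder class labels

with torch.no_grad():
    outputs = model(input_values, labels=labels)

predicted_class_ids = outputs.logits.argmax(dim=-1)  # (2,)
loss = outputs.loss                                  # cross-entropy since num_labels > 1
```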
( config )
Parameters
Wav2Vec2 Model with a frame classification head on top for tasks like Speaker Diarization.
forward
Parameters
attention_mask (torch.LongTensor
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
labels (torch.LongTensor
of shape (batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels == 1
a regression loss is computed (Mean-Square loss), If config.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Classification loss.
logits (torch.FloatTensor
of shape (batch_size, sequence_length, config.num_labels)
) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
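A minimal sketch of frame-level classification (e.g. speaker diarization) with a randomly initialized model; the 2-speaker setup and the 0.5 threshold are assumptions.

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForAudioFrameClassification

# hypothetical 2-speaker diarization setup on a randomly initialized backbone
config = Wav2Vec2Config(num_labels=2)
model = Wav2Vec2ForAudioFrameClassification(config)
model.eval()

input_values = torch.randn(1, 16000)  # one second of 16 kHz audio (random placeholder)

with torch.no_grad():
    logits = model(input_values).logits  # (1, num_frames, num_labels)

# one prediction per output frame and per label
probabilities = torch.sigmoid(logits)
speaker_activity = (probabilities > 0.5).long()
```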
( config )
Parameters
Wav2Vec2 Model with an XVector feature extraction head on top for tasks like Speaker Verification.
forward
Parameters
attention_mask (torch.LongTensor
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
labels (torch.LongTensor
of shape (batch_size,)
, optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels == 1
a regression loss is computed (Mean-Square loss), If config.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Classification loss.
logits (torch.FloatTensor
of shape (batch_size, config.xvector_output_dim)
) — Classification hidden states before AMSoftmax.
embeddings (torch.FloatTensor
of shape (batch_size, config.xvector_output_dim)
) — Utterance embeddings used for vector similarity-based retrieval.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
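A minimal sketch of extracting XVector embeddings and comparing two utterances for speaker verification; the model is randomly initialized here and the similarity threshold is an assumption that depends on the checkpoint.

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForXVector

# randomly initialized model; a speaker-verification checkpoint would be loaded with from_pretrained
model = Wav2Vec2ForXVector(Wav2Vec2Config())
model.eval()

# two utterances (random placeholders for real 16 kHz audio)
input_values = torch.randn(2, 16000)

with torch.no_grad():
    embeddings = model(input_values).embeddings  # (2, xvector_output_dim)

# cosine similarity between the two utterance embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
is_same_speaker = similarity > 0.7  # threshold is task/checkpoint dependent (assumption)
```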
( config: Wav2Vec2Config )
Parameters
forward
Parameters
attention_mask (torch.LongTensor
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
mask_time_indices (torch.BoolTensor
of shape (batch_size, sequence_length)
, optional) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.
sampled_negative_indices (torch.BoolTensor
of shape (batch_size, sequence_length, num_negatives)
, optional) — Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss. Required input for pre-training.
Returns
projected_states (torch.FloatTensor
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.
projected_quantized_states (torch.FloatTensor
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
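A minimal sketch of a pre-training forward pass, assuming the facebook/wav2vec2-base checkpoint and a dummy waveform. The masking helpers used here (_compute_mask_indices, _sample_negative_indices) are private utilities of the transformers implementation and may change between versions:

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

# one second of dummy 16 kHz audio standing in for a real recording
input_values = feature_extractor(
    torch.randn(16000).numpy(), sampling_rate=16000, return_tensors="pt"
).input_values

# length of the feature sequence produced by the convolutional encoder
batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()

# sample masked time steps and negative codevector indices for the contrastive task
mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)
sampled_negative_indices = _sample_negative_indices(
    (batch_size, sequence_length), model.config.num_negatives, mask_time_indices=mask_time_indices
)
mask_time_indices = torch.tensor(mask_time_indices, dtype=torch.long)
sampled_negative_indices = torch.tensor(sampled_negative_indices, dtype=torch.long)

with torch.no_grad():
    outputs = model(
        input_values,
        mask_time_indices=mask_time_indices,
        sampled_negative_indices=sampled_negative_indices,
    )

# predicted (projected_states) and target (projected_quantized_states) vectors should become
# similar at masked positions once the model is trained
cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)
```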
( *args**kwargs )
Parameters
The bare TFWav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
TensorFlow models and layers in transformers
accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit()
things should “just work” for you - just pass your inputs and labels in any format that model.fit()
supports! If, however, you want to use the second format outside of Keras methods like fit()
and predict()
, such as when creating your own layers or models with the Keras Functional
API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_values
only and nothing else: model(input_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_values, attention_mask])
or model([input_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_values": input_values, "token_type_ids": token_type_ids})
call
Parameters
input_values (np.ndarray
, tf.Tensor
, List[tf.Tensor]
, Dict[str, tf.Tensor]
or Dict[str, np.ndarray]
and each example must have the shape ({0})
) — Float values of the input raw speech waveform.
attention_mask (np.ndarray
or tf.Tensor
of shape ({0})
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (np.ndarray
or tf.Tensor
of shape ({0})
, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]
:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (np.ndarray
or tf.Tensor
of shape ({0})
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]
.
head_mask (np.ndarray
or tf.Tensor
of shape (num_heads,)
or (num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]
:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray
or tf.Tensor
of shape ({0}, hidden_size)
, optional) — Optionally, instead of passing input_values
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_values
indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
training (bool
, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
Returns
last_hidden_state (tf.Tensor
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of tf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of tf.Tensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
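A minimal sketch of extracting hidden states with the TensorFlow model, assuming the facebook/wav2vec2-base-960h checkpoint and a dummy 16 kHz waveform in place of real audio:

```python
import numpy as np
from transformers import AutoProcessor, TFWav2Vec2Model

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

# one second of dummy 16 kHz audio standing in for a real recording
speech = np.random.randn(16000).astype(np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="tf")

outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```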
( *args**kwargs )
call
( input_values: tf.Tensorattention_mask: tf.Tensor | None = Noneoutput_attentions: bool | None = Noneoutput_hidden_states: bool | None = Nonereturn_dict: bool | None = Nonelabels: tf.Tensor | None = Nonetraining: bool = False )
( *args**kwargs )
Parameters
TFWav2Vec2 Model with a language modeling
head on top for Connectionist Temporal Classification (CTC).
TensorFlow models and layers in transformers
accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit()
things should “just work” for you - just pass your inputs and labels in any format that model.fit()
supports! If, however, you want to use the second format outside of Keras methods like fit()
and predict()
, such as when creating your own layers or models with the Keras Functional
API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_values
only and nothing else: model(input_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_values, attention_mask])
or model([input_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_values": input_values, "token_type_ids": token_type_ids})
call
Parameters
input_values (np.ndarray
, tf.Tensor
, List[tf.Tensor]
, Dict[str, tf.Tensor]
or Dict[str, np.ndarray]
and each example must have the shape ({0})
) — Float values of the input raw speech waveform.
attention_mask (np.ndarray
or tf.Tensor
of shape ({0})
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (np.ndarray
or tf.Tensor
of shape ({0})
, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]
:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (np.ndarray
or tf.Tensor
of shape ({0})
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]
.
head_mask (np.ndarray
or tf.Tensor
of shape (num_heads,)
or (num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]
:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray
or tf.Tensor
of shape ({0}, hidden_size)
, optional) — Optionally, instead of passing input_values
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_values
indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
training (bool
, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor
or np.ndarray
of shape (batch_size, sequence_length)
, optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size]
(see input_values
docstring) Tokens with indices set to -100
are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
loss (tf.Tensor
of shape (n,)
, optional, where n is the number of non-masked labels, returned when labels
is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor
of shape (batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of tf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of tf.Tensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
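A minimal sketch of greedy CTC decoding with the TensorFlow model; the checkpoint and the dummy waveform are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf
from transformers import AutoProcessor, TFWav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# dummy 16 kHz waveform standing in for a real recording
speech = np.random.randn(16000).astype(np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="tf")

logits = model(**inputs).logits  # (batch_size, sequence_length, config.vocab_size)

# greedy decoding: most likely token per frame; the tokenizer collapses repeats and blanks
predicted_ids = tf.argmax(logits, axis=-1)
transcription = processor.batch_decode(predicted_ids)
```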
( config: Wav2Vec2Configinput_shape: typing.Tuple = (1, 1024)seed: int = 0dtype: dtype = <class 'jax.numpy.float32'>_do_init: bool = True**kwargs )
Parameters
dtype (jax.numpy.dtype
, optional, defaults to jax.numpy.float32
) — The data type of the computation. Can be one of jax.numpy.float32
, jax.numpy.float16
(on GPUs) and jax.numpy.bfloat16
(on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype
.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
Finally, this model supports inherent JAX features such as:
__call__
Parameters
attention_mask (jnp.ndarray
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
mask_time_indices (jnp.ndarray
of shape (batch_size, sequence_length)
, optional) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
Returns
last_hidden_state (jnp.ndarray
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (jnp.ndarray
of shape (batch_size, sequence_length, last_conv_dim)
) — Sequence of extracted feature vectors of the last convolutional layer of the model with last_conv_dim
being the dimension of the last convolutional layer.
hidden_states (tuple(jnp.ndarray)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of jnp.ndarray
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxWav2Vec2PreTrainedModel
forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
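A minimal sketch of a Flax forward pass, assuming the facebook/wav2vec2-large-lv60 checkpoint (with Flax weights available) and a dummy waveform:

```python
import numpy as np
from transformers import AutoProcessor, FlaxWav2Vec2Model

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-lv60")
model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

# dummy 16 kHz waveform standing in for a real recording
speech = np.random.randn(16000).astype(np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="np", padding=True)

outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
extract_features = outputs.extract_features    # (batch_size, sequence_length, last_conv_dim)
```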
( config: Wav2Vec2Configinput_shape: typing.Tuple = (1, 1024)seed: int = 0dtype: dtype = <class 'jax.numpy.float32'>_do_init: bool = True**kwargs )
Parameters
dtype (jax.numpy.dtype
, optional, defaults to jax.numpy.float32
) — The data type of the computation. Can be one of jax.numpy.float32
, jax.numpy.float16
(on GPUs) and jax.numpy.bfloat16
(on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype
.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
Finally, this model supports inherent JAX features such as:
__call__
Parameters
attention_mask (jnp.ndarray
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
mask_time_indices (jnp.ndarray
of shape (batch_size, sequence_length)
, optional) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
Returns
logits (jnp.ndarray
of shape (batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of jnp.ndarray
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxWav2Vec2PreTrainedModel
forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
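A minimal sketch of greedy CTC decoding with the Flax model; the checkpoint name and dummy waveform are illustrative assumptions:

```python
import jax.numpy as jnp
import numpy as np
from transformers import AutoProcessor, FlaxWav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")

# dummy 16 kHz waveform standing in for a real recording
speech = np.random.randn(16000).astype(np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="np", padding=True)

logits = model(**inputs).logits  # (batch_size, sequence_length, config.vocab_size)

# greedy decoding: most likely token per frame; the tokenizer collapses repeats and blanks
predicted_ids = jnp.argmax(logits, axis=-1)
transcription = processor.batch_decode(predicted_ids)
```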
( config: Wav2Vec2Configinput_shape: typing.Tuple = (1, 1024)seed: int = 0dtype: dtype = <class 'jax.numpy.float32'>_do_init: bool = True**kwargs )
Parameters
dtype (jax.numpy.dtype
, optional, defaults to jax.numpy.float32
) — The data type of the computation. Can be one of jax.numpy.float32
, jax.numpy.float16
(on GPUs) and jax.numpy.bfloat16
(on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype
.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
Finally, this model supports inherent JAX features such as:
__call__
Parameters
attention_mask (jnp.ndarray
of shape (batch_size, sequence_length)
, optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]
:
1 for tokens that are not masked,
0 for tokens that are masked.
mask_time_indices (jnp.ndarray
of shape (batch_size, sequence_length)
, optional) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.
output_attentions (bool
, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions
under returned tensors for more detail.
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
Returns
projected_states (jnp.ndarray
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.
projected_quantized_states (jnp.ndarray
of shape (batch_size, sequence_length, config.proj_codevector_dim)
) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.
hidden_states (tuple(jnp.ndarray)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of jnp.ndarray
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of jnp.ndarray
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
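A minimal sketch of a pre-training forward pass in Flax, assuming the facebook/wav2vec2-large-lv60 checkpoint. The masking helper _compute_mask_indices is a private utility of the Flax implementation and may change between versions:

```python
import numpy as np
import jax.numpy as jnp
from transformers import AutoFeatureExtractor, FlaxWav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_flax_wav2vec2 import _compute_mask_indices

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
model = FlaxWav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-large-lv60")

# one second of dummy 16 kHz audio standing in for a real recording
input_values = feature_extractor(
    np.random.randn(16000).astype(np.float32), sampling_rate=16000, return_tensors="np"
).input_values

# length of the feature sequence produced by the convolutional encoder
batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)

# sample masked time steps for the contrastive task
mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)

outputs = model(input_values, mask_time_indices=mask_time_indices, train=False)

# cosine similarity between predicted states and quantized targets
projected = outputs.projected_states
targets = outputs.projected_quantized_states
cosine_sim = jnp.sum(projected * targets, axis=-1) / (
    jnp.linalg.norm(projected, axis=-1) * jnp.linalg.norm(targets, axis=-1)
)
```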
A blog post on 🌎.
A blog post on how to 🌎.
A blog post on 🌎.
A notebook on how to . 🌎
is supported by a notebook on , and .
A blog post on how to deploy Wav2Vec2 for .
vocab_size (int
, optional, defaults to 32) — Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the inputs_ids
passed when calling Wav2Vec2Model or TFWav2Vec2Model.
final_dropout (float
, optional, defaults to 0.1) — The dropout probability for the final projection layer of .
layerdrop (float
, optional, defaults to 0.1) — The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
apply_spec_augment (bool
, optional, defaults to True
) — Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see .
ctc_loss_reduction (str
, optional, defaults to "sum"
) — Specifies the reduction to apply to the output of torch.nn.CTCLoss
. Only relevant when training an instance of .
ctc_zero_infinity (bool
, optional, defaults to False
) — Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss
. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of .
use_weighted_layer_sum (bool
, optional, defaults to False
) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of .
adapter_attn_dim (int
, optional) — Dimension of the attention adapter weights to be used in each attention block. An example of a model using attention adapters is .
This is the configuration class to store the configuration of a . It is used to instantiate a Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 architecture.
Configuration objects inherit from and can be used to control the model outputs. Read the documentation from for more information.
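A minimal usage sketch: instantiating a default configuration and a randomly initialized model from it (the defaults roughly mirror the base architecture):

```python
from transformers import Wav2Vec2Config, Wav2Vec2Model

# initializing a configuration with default values
configuration = Wav2Vec2Config()

# initializing a (randomly weighted) model from that configuration
model = Wav2Vec2Model(configuration)

# the configuration can be read back from the model
configuration = model.config
```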
target_lang (str
, optional) — A target language the tokenizer should set by default. target_lang
has to be defined for multi-lingual, nested vocabulary such as .
**kwargs — Additional keyword arguments passed along to
This tokenizer inherits from which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.
( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = Nonetext_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = Nonetext_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = Nonetext_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = Noneadd_special_tokens: bool = Truepadding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = Falsetruncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = Nonemax_length: typing.Optional[int] = Nonestride: int = 0is_split_into_words: bool = Falsepad_to_multiple_of: typing.Optional[int] = Nonereturn_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = Nonereturn_token_type_ids: typing.Optional[bool] = Nonereturn_attention_mask: typing.Optional[bool] = Nonereturn_overflowing_tokens: bool = Falsereturn_special_tokens_mask: bool = Falsereturn_offsets_mapping: bool = Falsereturn_length: bool = Falseverbose: bool = True**kwargs ) →
padding (bool
, str
or , optional, defaults to False
) — Activates and controls padding. Accepts the following values:
truncation (bool
, str
or , optional, defaults to False
) — Activates and controls truncation. Accepts the following values:
return_tensors (str
or , optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
This is only available on fast tokenizers inheriting from , if using Python’s tokenizer, this method will raise NotImplementedError
.
A with the following fields:
Please take a look at the Example of to better understand how to make use of output_char_offsets
. works the same way with batched output.
Please take a look at the Example of to better understand how to make use of output_word_offsets
. works the same way with batched output.
do_normalize (bool
, optional, defaults to True
) — Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, e.g., .
return_attention_mask (bool
, optional, defaults to False
) — Whether or not should return attention_mask
.
Wav2Vec2 models that have set config.feat_extract_norm == "group"
, such as , have not been trained using attention_mask
. For such models, input_values
should simply be padded with 0 and no attention_mask
should be passed.
For Wav2Vec2 models that have set config.feat_extract_norm == "layer"
, such as , attention_mask
should be passed for batched inference.
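To make the distinction concrete, here is a small sketch (the checkpoint names and dummy waveforms are assumptions) showing that a "layer"-norm extractor returns an attention_mask for batched inputs while a "group"-norm extractor, by default, does not:

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

# two dummy 16 kHz waveforms of different lengths, so padding is required
speech = [np.random.randn(16000).astype(np.float32), np.random.randn(24000).astype(np.float32)]

# "layer"-norm checkpoint: pad and pass the returned attention_mask to the model
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
batch = extractor(speech, sampling_rate=16000, padding=True, return_tensors="pt")
print("attention_mask" in batch)  # True

# "group"-norm checkpoint: inputs are simply zero-padded and no attention_mask is returned
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
batch = extractor(speech, sampling_rate=16000, padding=True, return_tensors="pt")
print("attention_mask" in batch)  # False
```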
This feature extractor inherits from which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
padding (bool
, str
or , optional, defaults to False
) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:
Wav2Vec2 models that have set config.feat_extract_norm == "group"
, such as , have not been trained using attention_mask
. For such models, input_values
should simply be padded with 0 and no attention_mask
should be passed.
For Wav2Vec2 models that have set config.feat_extract_norm == "layer"
, such as , attention_mask
should be passed for batched inference.
return_tensors (str
or , optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
feature_extractor (Wav2Vec2FeatureExtractor
) — An instance of . The feature extractor is a required input.
tokenizer () — An instance of . The tokenizer is a required input.
offers all the functionalities of and . See the docstring of and for more information.
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s and returns its output. If used in the context as_target_processor()
this method forwards all its arguments to PreTrainedTokenizer’s . Please refer to the docstring of the above two methods for more information.
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s and returns its output. If used in the context as_target_processor()
this method forwards all its arguments to PreTrainedTokenizer’s . Please refer to the docstring of the above two methods for more information.
kwargs (Dict[str, Any]
, optional) — Additional key word arguments passed along to the method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the method.
This class method is simply calling and . Please refer to the docstrings of the methods above for more information.
This method forwards all its arguments to PreTrainedTokenizer’s . Please refer to the docstring of this method for more information.
This method forwards all its arguments to PreTrainedTokenizer’s . Please refer to the docstring of this method for more information.
feature_extractor () — An instance of . The feature extractor is a required input.
tokenizer () — An instance of . The tokenizer is a required input.
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s and returns its output. If used in the context as_target_processor()
this method forwards all its arguments to Wav2Vec2CTCTokenizer’s . Please refer to the docstring of the above two methods for more information.
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s and returns its output. If used in the context as_target_processor()
this method forwards all its arguments to Wav2Vec2CTCTokenizer’s . Please refer to the docstring of the above two methods for more information.
a path to a directory containing a feature extractor file saved using the method, e.g., ./my_model_directory/
.
a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json
. **kwargs — Additional keyword arguments passed along to both and
Instantiate a from a pretrained Wav2Vec2 processor.
This class method is simply calling Wav2Vec2FeatureExtractor’s , Wav2Vec2CTCTokenizer’s , and pyctcdecode.BeamSearchDecoderCTC.load_from_hf_hub
.
Please take a look at the Example of to better understand how to make use of output_word_offsets
. works the same way with batched output.
This function makes use of Python’s multiprocessing. Currently, multiprocessing is available only on Unix systems (see this ).
Example: See .
If you are planning to decode multiple batches of audios, you should consider using and passing an instantiated multiprocessing.Pool
. Otherwise, performance will be slower than calling for each audio individually, as it internally instantiates a new Pool
for every call. See the example below:
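A minimal sketch of reusing a single multiprocessing pool across calls to batch_decode. The checkpoint (a model paired with a pyctcdecode language model) and the `batches` iterable are assumptions for illustration:

```python
from multiprocessing import get_context

import torch
from transformers import AutoModelForCTC, AutoProcessor

# assumed checkpoint that ships a pyctcdecode decoder alongside the acoustic model
model_id = "patrickvonplaten/wav2vec2-base-100h-with-lm"
processor = AutoProcessor.from_pretrained(model_id)  # returns a Wav2Vec2ProcessorWithLM
model = AutoModelForCTC.from_pretrained(model_id)

def batch_logits(waveforms, sampling_rate=16000):
    inputs = processor(waveforms, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    with torch.no_grad():
        return model(**inputs).logits.numpy()

# reuse one pool for all batches instead of letting batch_decode create a new one per call
with get_context("fork").Pool(processes=4) as pool:
    for waveforms in batches:  # `batches`: an assumed iterable of lists of 1-D float arrays
        transcriptions = processor.batch_decode(batch_logits(waveforms), pool=pool).text
```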
loss (optional, returned when sample_negative_indices
are passed, torch.FloatTensor
of shape (1,)
) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
contrastive_loss (optional, returned when sample_negative_indices
are passed, torch.FloatTensor
of shape (1,)
) — The contrastive loss (L_m) as stated in the official paper.
diversity_loss (optional, returned when sample_negative_indices
are passed, torch.FloatTensor
of shape (1,)
) — The diversity loss (L_d) as stated in the official paper.
Output type of , with potential hidden states and attentions.
loss (optional, returned when model is in train mode, jnp.ndarray
of shape (1,)
) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
This model is a PyTorch sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( input_values: typing.Optional[torch.Tensor]attention_mask: typing.Optional[torch.Tensor] = Nonemask_time_indices: typing.Optional[torch.FloatTensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
input_values (torch.FloatTensor
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type torch.FloatTensor
. See for details.
attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
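A minimal sketch of a forward pass, assuming the facebook/wav2vec2-base-960h checkpoint and a dummy waveform in place of real audio:

```python
import torch
from transformers import AutoProcessor, Wav2Vec2Model

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

# one second of dummy 16 kHz audio standing in for a real recording
speech = torch.randn(16000).numpy()
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```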
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
Wav2Vec2 Model with a language modeling
head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
This model is a PyTorch sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( input_values: typing.Optional[torch.Tensor]attention_mask: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = Nonelabels: typing.Optional[torch.Tensor] = None ) → or tuple(torch.FloatTensor)
input_values (torch.FloatTensor
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type torch.FloatTensor
. See for details.
attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
Activate the special to use this method in a firewalled environment.
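A minimal sketch of transcription and loss computation with the CTC head; the checkpoint, the dummy waveform, and the placeholder transcript are illustrative assumptions:

```python
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# dummy 16 kHz waveform standing in for a real recording
speech = torch.randn(16000).numpy()
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy decoding: most likely token per frame, collapsed by the tokenizer during decoding
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

# for training, encode the target transcript as labels via the tokenizer and pass them to forward
with processor.as_target_processor():
    labels = processor("A PLACEHOLDER TRANSCRIPT", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
```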
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
This model is a PyTorch sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( input_values: typing.Optional[torch.Tensor]attention_mask: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = Nonelabels: typing.Optional[torch.Tensor] = None ) → or tuple(torch.FloatTensor)
input_values (torch.FloatTensor
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type torch.FloatTensor
. See for details.
attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
This model is a PyTorch sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( input_values: typing.Optional[torch.Tensor]attention_mask: typing.Optional[torch.Tensor] = Nonelabels: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
input_values (torch.FloatTensor
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type torch.FloatTensor
. See for details.
attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
This model is a PyTorch sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( input_values: typing.Optional[torch.Tensor]attention_mask: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = Nonelabels: typing.Optional[torch.Tensor] = None ) → or tuple(torch.FloatTensor)
input_values (torch.FloatTensor
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type torch.FloatTensor
. See for details.
attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
Wav2Vec2 Model with a quantizer and VQ
head on top. Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
This model is a PyTorch sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( input_values: typing.Optional[torch.Tensor]attention_mask: typing.Optional[torch.Tensor] = Nonemask_time_indices: typing.Optional[torch.BoolTensor] = Nonesampled_negative_indices: typing.Optional[torch.BoolTensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
input_values (torch.FloatTensor
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type torch.FloatTensor
. See for details.
attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
loss (optional, returned when sample_negative_indices
are passed, torch.FloatTensor
of shape (1,)
) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
contrastive_loss (optional, returned when sample_negative_indices
are passed, torch.FloatTensor
of shape (1,)
) — The contrastive loss (L_m) as stated in the official paper.
diversity_loss (optional, returned when sample_negative_indices
are passed, torch.FloatTensor
of shape (1,)
) — The diversity loss (L_d) as stated in the official paper.
The forward method, overrides the __call__
special method.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.
Note that when creating models and layers with then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
( input_values: tf.Tensorattention_mask: tf.Tensor | None = Nonetoken_type_ids: tf.Tensor | None = Noneposition_ids: tf.Tensor | None = Nonehead_mask: tf.Tensor | None = Noneinputs_embeds: tf.Tensor | None = Noneoutput_attentions: Optional[bool] = Noneoutput_hidden_states: Optional[bool] = Nonereturn_dict: Optional[bool] = Nonetraining: bool = False ) → or tuple(tf.Tensor)
Indices can be obtained using . See and for details.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
or tuple(tf.Tensor)
A or a tuple of tf.Tensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.
Note that when creating models and layers with then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
( input_values: tf.Tensorattention_mask: tf.Tensor | None = Nonetoken_type_ids: tf.Tensor | None = Noneposition_ids: tf.Tensor | None = Nonehead_mask: tf.Tensor | None = Noneinputs_embeds: tf.Tensor | None = Noneoutput_attentions: Optional[bool] = Nonelabels: tf.Tensor | None = Noneoutput_hidden_states: Optional[bool] = Nonereturn_dict: Optional[bool] = Nonetraining: Optional[bool] = False ) → or tuple(tf.Tensor)
Indices can be obtained using . See and for details.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
or tuple(tf.Tensor)
A or a tuple of tf.Tensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration () and inputs.
The forward method, overrides the __call__
special method.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
If you wish to change the dtype of the model parameters, see and .
The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
( input_valuesattention_mask = Nonemask_time_indices = Noneparams: dict = Nonedropout_rng: PRNGKey = Nonetrain: bool = Falseoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonefreeze_feature_encoder: bool = Falsereturn_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
input_values (jnp.ndarray
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type jnp.ndarray
. See for details.
Warning: attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (<class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'>
) and inputs.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
If you wish to change the dtype of the model parameters, see and .
Wav2Vec2 Model with a language modeling
head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
( input_valuesattention_mask = Nonemask_time_indices = Noneparams: dict = Nonedropout_rng: PRNGKey = Nonetrain: bool = Falseoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonefreeze_feature_encoder: bool = Falsereturn_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
input_values (jnp.ndarray
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type jnp.ndarray
. See for details.
Warning: attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (<class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'>
) and inputs.
config () — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the method to load the model weights.
If you wish to change the dtype of the model parameters, see and .
Wav2Vec2 Model with a quantizer and VQ
head on top. Wav2Vec2 was proposed in by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from . Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
( input_valuesattention_mask = Nonemask_time_indices = Nonegumbel_temperature: int = 1params: dict = Nonedropout_rng: PRNGKey = Nonegumbel_rng: PRNGKey = Nonetrain: bool = Falseoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonefreeze_feature_encoder: bool = Falsereturn_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
input_values (jnp.ndarray
of shape (batch_size, sequence_length)
) — Float values of input raw speech waveform. Values can be obtained by loading a .flac
or .wav
audio file into an array of type List[float]
or a numpy.ndarray
, e.g. via the soundfile library (pip install soundfile
). To prepare the array into input_values
, the should be used for padding and conversion into a tensor of type jnp.ndarray
. See for details.
Warning: attention_mask
should only be passed if the corresponding processor has config.return_attention_mask == True
. For all models whose processor has config.return_attention_mask == False
, such as , attention_mask
should not be passed to avoid degraded performance when doing batched inference. For such models input_values
should simply be padded with 0 and passed without attention_mask
. Be aware that these models also yield slightly different results depending on whether input_values
is padded or not.
return_dict (bool
, optional) — Whether or not to return a instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (<class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'>
) and inputs.
loss (optional, returned when model is in train mode, jnp.ndarray
of shape (1,)
) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
The forward method, overrides the __call__
special method.