UniSpeech

Overview

The UniSpeech model was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.

The abstract from the paper is the following:

In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.

Tips:

  • UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use Wav2Vec2Processor for the feature extraction.

  • The UniSpeech model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using Wav2Vec2CTCTokenizer (see the sketch below).
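
For example, a minimal sketch of preparing model inputs with the processor; the checkpoint name is a placeholder rather than an official release, and any UniSpeech checkpoint that ships a processor can be substituted:

```python
import numpy as np
from transformers import Wav2Vec2Processor

# placeholder checkpoint name; substitute a UniSpeech checkpoint that provides a processor
processor = Wav2Vec2Processor.from_pretrained("<unispeech-ctc-checkpoint>")

# one second of silence at 16 kHz stands in for a real recording
raw_waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(raw_waveform, sampling_rate=16000, return_tensors="pt")  # contains input_values
```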

This model was contributed by patrickvonplaten. The authors' code can be found here.

Documentation resources

  • Audio classification task guide

  • Automatic speech recognition task guide

UniSpeechConfig

class transformers.UniSpeechConfig


( vocab_size = 32, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, feat_quantizer_dropout = 0.0, final_dropout = 0.1, layerdrop = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (512, 512, 512, 512, 512, 512, 512), conv_stride = (5, 2, 2, 2, 2, 2, 2), conv_kernel = (10, 3, 3, 3, 3, 2, 2), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, do_stable_layer_norm = False, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, num_codevectors_per_group = 320, num_codevector_groups = 2, contrastive_logits_temperature = 0.1, num_negatives = 100, codevector_dim = 256, proj_codevector_dim = 256, diversity_loss_weight = 0.1, ctc_loss_reduction = 'mean', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, num_ctc_classes = 80, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, replace_prob = 0.5, **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 32) — Vocabulary size of the UniSpeech model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling UniSpeechModel.

  • hidden_size (int, optional, defaults to 768) β€” Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 12) β€” Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) β€” Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 3072) β€” Dimensionality of the β€œintermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to "gelu") β€” The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • hidden_dropout (float, optional, defaults to 0.1) β€” The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • activation_dropout (float, optional, defaults to 0.1) β€” The dropout ratio for activations inside the fully connected layer.

  • attention_dropout (float, optional, defaults to 0.1) β€” The dropout ratio for the attention probabilities.

  • final_dropout (float, optional, defaults to 0.1) β€” The dropout probability for the final projection layer of UniSpeechForCTCarrow-up-right.

  • layerdrop (float, optional, defaults to 0.1) — The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.

  • initializer_range (float, optional, defaults to 0.02) β€” The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.

  • feat_extract_norm (str, optional, defaults to "group") β€” The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.

  • feat_proj_dropout (float, optional, defaults to 0.0) β€” The dropout probability for output of the feature encoder.

  • feat_extract_activation (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • feat_quantizer_dropout (float, optional, defaults to 0.0) — The dropout probability for quantized feature encoder states.

  • conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) β€” A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of conv_dim defines the number of 1D convolutional layers.

  • conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) β€” A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.

  • conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.

  • conv_bias (bool, optional, defaults to False) β€” Whether the 1D convolutional layers have a bias.

  • num_conv_pos_embeddings (int, optional, defaults to 128) β€” Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.

  • num_conv_pos_embedding_groups (int, optional, defaults to 16) β€” Number of groups of 1D convolutional positional embeddings layer.

  • do_stable_layer_norm (bool, optional, defaults to False) β€” Whether to apply stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.

  • apply_spec_augment (bool, optional, defaults to True) β€” Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognitionarrow-up-right.

  • mask_time_prob (float, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.

  • mask_time_length (int, optional, defaults to 10) β€” Length of vector span along the time axis.

  • mask_time_min_masks (int, optional, defaults to 2) — The minimum number of masks of length mask_time_length generated along the time axis, each time step, irrespective of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks.

  • mask_feature_prob (float, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.

  • mask_feature_length (int, optional, defaults to 10) β€” Length of vector span along the feature axis.

  • mask_feature_min_masks (int, optional, defaults to 0) — The minimum number of masks of length mask_feature_length generated along the feature axis, each time step, irrespective of mask_feature_prob. Only relevant if mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.

  • num_codevectors_per_group (int, optional, defaults to 320) β€” Number of entries in each quantization codebook (group).

  • num_codevector_groups (int, optional, defaults to 2) β€” Number of codevector groups for product codevector quantization.

  • contrastive_logits_temperature (float, optional, defaults to 0.1) β€” The temperature kappa in the contrastive loss.


  • num_negatives (int, optional, defaults to 100) β€” Number of negative samples for the contrastive loss.

  • codevector_dim (int, optional, defaults to 256) β€” Dimensionality of the quantized feature vectors.

  • proj_codevector_dim (int, optional, defaults to 256) β€” Dimensionality of the final projection of both the quantized and the transformer features.

  • diversity_loss_weight (int, optional, defaults to 0.1) β€” The weight of the codebook diversity loss component.

  • ctc_loss_reduction (str, optional, defaults to "mean") β€” Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an instance of UniSpeechForCTCarrow-up-right.

  • ctc_zero_infinity (bool, optional, defaults to False) β€” Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of UniSpeechForCTCarrow-up-right.

  • use_weighted_layer_sum (bool, optional, defaults to False) β€” Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of UniSpeechForSequenceClassificationarrow-up-right.

  • classifier_proj_size (int, optional, defaults to 256) β€” Dimensionality of the projection before token mean-pooling for classification.

  • replace_prob (float, optional, defaults to 0.5) — Probability that a transformer feature is replaced by a quantized feature during pretraining.

This is the configuration class to store the configuration of a UniSpeechModel. It is used to instantiate a UniSpeech model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UniSpeech microsoft/unispeech-large-1500h-cv architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

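For instance, a minimal sketch of building a model from a configuration; UniSpeechConfig() uses the default values documented above, and individual settings such as the SpecAugment masking options can be overridden as keyword arguments:

```python
from transformers import UniSpeechConfig, UniSpeechModel

# Initializing a UniSpeech configuration with the default values
configuration = UniSpeechConfig()

# Optionally override individual settings, e.g. the SpecAugment time masking
configuration = UniSpeechConfig(mask_time_prob=0.05, mask_time_length=10)

# Initializing a model (with random weights) from the configuration
model = UniSpeechModel(configuration)

# Accessing the model configuration
configuration = model.config
```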

UniSpeech specific outputs

class transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput


( loss: typing.Optional[torch.FloatTensor] = None, projected_states: FloatTensor = None, projected_quantized_states: FloatTensor = None, codevector_perplexity: FloatTensor = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )

Parameters

  • loss (optional, returned when the model is in train mode, torch.FloatTensor of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.

  • projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) β€” Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.

  • projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) β€” Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) β€” Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) β€” Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of UniSpeechForPreTraining, with potential hidden states and attentions.

UniSpeechModel

class transformers.UniSpeechModel


( config: UniSpeechConfig )

Parameters

  • config (UniSpeechConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare UniSpeech Model transformer outputting raw hidden-states without any specific head on top. UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, mask_time_indices: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)

Parameters

  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) β€” Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?arrow-up-right

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • output_attentions (bool, optional) β€” Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) β€” Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) β€” Whether or not to return a ModelOutputarrow-up-right instead of a plain tuple.

Returns

transformers.modeling_outputs.Wav2Vec2BaseModelOutputarrow-up-right or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Wav2Vec2BaseModelOutputarrow-up-right or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (UniSpeechConfigarrow-up-right) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) β€” Sequence of hidden-states at the output of the last layer of the model.

  • extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) β€” Sequence of extracted feature vectors of the last convolutional layer of the model.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) β€” Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) β€” Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The UniSpeechModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

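A minimal sketch of extracting hidden states with the bare model; it assumes the microsoft/unispeech-large-1500h-cv checkpoint mentioned above ships a compatible feature extractor configuration and uses a small public LibriSpeech demo split:

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, UniSpeechModel

# small public demo split; any 16 kHz speech array works
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate

# assumes this checkpoint provides a feature extractor configuration
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-large-1500h-cv")
model = UniSpeechModel.from_pretrained("microsoft/unispeech-large-1500h-cv")

# the audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```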

UniSpeechForCTC

class transformers.UniSpeechForCTC


( config, target_lang: typing.Optional[str] = None )

Parameters

  • config (UniSpeechConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

UniSpeech Model with a language modeling head on top for Connectionist Temporal Classification (CTC). UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)

Parameters

  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) β€” Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?arrow-up-right

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • output_attentions (bool, optional) β€” Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) β€” Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) β€” Whether or not to return a ModelOutputarrow-up-right instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size, target_length), optional) β€” Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].

Returns

transformers.modeling_outputs.CausalLMOutputarrow-up-right or tuple(torch.FloatTensor)

A transformers.modeling_outputs.CausalLMOutputarrow-up-right or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (UniSpeechConfigarrow-up-right) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) β€” Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) β€” Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) β€” Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) β€” Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The UniSpeechForCTC forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

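A minimal transcription sketch; the checkpoint name is a placeholder for a UniSpeech model that has already been fine-tuned with CTC and ships a tokenizer, since a pre-trained checkpoint alone cannot be decoded to text:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, UniSpeechForCTC

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate

# placeholder name; substitute a UniSpeech checkpoint fine-tuned for CTC
processor = AutoProcessor.from_pretrained("<unispeech-ctc-checkpoint>")
model = UniSpeechForCTC.from_pretrained("<unispeech-ctc-checkpoint>")

inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy decoding of the CTC logits back to text
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

# computing the CTC loss against a reference transcription
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
loss = model(**inputs).loss
```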

UniSpeechForSequenceClassification

class transformers.UniSpeechForSequenceClassification


( config )

Parameters

  • config (UniSpeechConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

UniSpeech Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)

Parameters

  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) β€” Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?arrow-up-right

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • output_attentions (bool, optional) β€” Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) β€” Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) β€” Whether or not to return a ModelOutputarrow-up-right instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size,), optional) β€” Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

transformers.modeling_outputs.SequenceClassifierOutputarrow-up-right or tuple(torch.FloatTensor)

A transformers.modeling_outputs.SequenceClassifierOutputarrow-up-right or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (UniSpeechConfigarrow-up-right) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) β€” Classification (or regression if config.num_labels==1) loss.

  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) β€” Classification (or regression if config.num_labels==1) scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) β€” Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) β€” Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The UniSpeechForSequenceClassification forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

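A minimal audio-classification sketch; the checkpoint name is a placeholder for a UniSpeech model fine-tuned for sequence classification (for example keyword spotting), since the classification head of a pre-trained checkpoint is randomly initialized:

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, UniSpeechForSequenceClassification

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate

# placeholder name; substitute a UniSpeech checkpoint fine-tuned for audio classification
feature_extractor = AutoFeatureExtractor.from_pretrained("<unispeech-classification-checkpoint>")
model = UniSpeechForSequenceClassification.from_pretrained("<unispeech-classification-checkpoint>")

inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring class index back to its label
predicted_class_id = int(torch.argmax(logits, dim=-1).item())
predicted_label = model.config.id2label[predicted_class_id]
```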

UniSpeechForPreTraining

class transformers.UniSpeechForPreTraining


( config: UniSpeechConfig )

Parameters

  • config (UniSpeechConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

UniSpeech Model with a vector-quantization module and CTC loss for pre-training. UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput or tuple(torch.FloatTensor)

Parameters

  • input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.

  • attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) β€” Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

    What are attention masks?arrow-up-right

    attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not.

  • output_attentions (bool, optional) β€” Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) β€” Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) β€” Whether or not to return a ModelOutputarrow-up-right instead of a plain tuple.

  • mask_time_indices (torch.BoolTensor of shape (batch_size, sequence_length), optional) — Indices to mask extracted features for the contrastive loss. When in training mode, the model learns to predict masked extracted features in config.proj_codevector_dim space.

  • sampled_negative_indices (torch.BoolTensor of shape (batch_size, sequence_length, num_negatives), optional) β€” Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss. Required input for pre-training.

Returns

transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutputarrow-up-right or tuple(torch.FloatTensor)

A transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutputarrow-up-right or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (UniSpeechConfigarrow-up-right) and inputs.

  • loss (optional, returned when the model is in train mode, torch.FloatTensor of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.

  • projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) β€” Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.

  • projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) β€” Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) β€” Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) β€” Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The UniSpeechForPreTraining forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

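A minimal sketch of a pre-training forward pass; it assumes the microsoft/unispeech-large-1500h-cv checkpoint ships a feature extractor configuration, uses random audio in place of real data, and omits the mask_time_indices a real pre-training loop would supply:

```python
import torch
from transformers import AutoFeatureExtractor, UniSpeechForPreTraining

# assumes this checkpoint provides a feature extractor configuration
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-large-1500h-cv")
model = UniSpeechForPreTraining.from_pretrained("microsoft/unispeech-large-1500h-cv")

# one second of random 16 kHz audio stands in for a real waveform
raw_speech = torch.randn(16000).numpy()
inputs = feature_extractor(raw_speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# transformer features and their quantized targets, both projected to config.proj_codevector_dim
projected_states = outputs.projected_states
projected_quantized_states = outputs.projected_quantized_states
```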
