MusicGen
The MusicGen model was proposed in the paper Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or audio codes, conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec, to recover the audio waveform.
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g. hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.
The abstract from the paper is the following:
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.
This model was contributed by sanchit-gandhi. The original code can be found in the facebookresearch/audiocraft repository. The pre-trained checkpoints can be found on the Hugging Face Hub.
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, so we encourage using sampling mode where possible. Sampling is enabled by default, and can be explicitly specified by setting do_sample=True in the call to MusicgenForConditionalGeneration.generate(), or by overriding the model’s generation config (see below).
Generation is limited by the sinusoidal positional embeddings to 30-second inputs. That is, MusicGen cannot generate more than 30 seconds of audio (1503 tokens), and input audio passed for audio-prompted generation counts towards this limit: given an input of 20 seconds of audio, MusicGen cannot generate more than 10 seconds of additional audio.
The inputs for unconditional (or ‘null’) generation can be obtained through the method MusicgenForConditionalGeneration.get_unconditional_inputs().
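A minimal sketch, assuming the facebook/musicgen-small checkpoint:

```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# sample unconditional inputs for a single generation
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)

# 256 new tokens corresponds to roughly 5 seconds of audio
audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
```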
The audio outputs are a three-dimensional Torch tensor of shape (batch_size, num_channels, sequence_length). To listen to the generated audio samples, you can either play them in an ipynb notebook:
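A sketch, assuming the audio_values tensor from the snippet above; the sampling rate is read from the audio encoder’s config:

```python
from IPython.display import Audio

sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```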
Or save them as a .wav file using a third-party library, e.g. scipy:
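For instance, a sketch using scipy.io.wavfile to write the first batch element and channel:

```python
from scipy.io import wavfile

sampling_rate = model.config.audio_encoder.sampling_rate
wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```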
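For text-conditional generation, the prompts are pre-processed with the processor and passed to generate. A sketch, with illustrative prompts and the facebook/musicgen-small checkpoint assumed:

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="pt",
)

audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```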
The guidance_scale is used in classifier-free guidance (CFG), setting the weighting between the conditional logits (which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or ‘null’ prompt). A higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer audio quality. CFG is enabled by setting guidance_scale > 1. For best results, use guidance_scale=3 (the default).
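For audio-prompted generation, the processor also pre-processes the audio prompt. A sketch of audio continuation; the dataset and prompt text here are illustrative assumptions:

```python
from datasets import load_dataset
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

# take the first half of the audio sample as the audio prompt
sample["array"] = sample["array"][: len(sample["array"]) // 2]

inputs = processor(
    audio=sample["array"],
    sampling_rate=sample["sampling_rate"],
    text=["80s blues track with groovy saxophone"],
    padding=True,
    return_tensors="pt",
)

audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

For batched audio-prompted generation, the padding added by the processor can be stripped from the generated audio_values with the processor’s batch_decode method:

```python
# post-process to remove padding from the batched audio
audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask)
```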
The default parameters that control the generation process, such as sampling, guidance scale and number of generated tokens, can be found in the model’s generation config, and updated as desired:
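For example, a sketch that inspects and overrides the defaults (the attributes follow the standard GenerationConfig):

```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# inspect the default generation config
print(model.generation_config)

# increase the guidance scale to 4.0
model.generation_config.guidance_scale = 4.0

# decrease the number of tokens generated by default
model.generation_config.max_new_tokens = 256
```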
Note that any arguments passed to the generate method will supersede those in the generation config, so setting do_sample=False in the call to generate will supersede the setting of model.generation_config.do_sample in the generation config.
The MusicGen model can be de-composed into three distinct stages:
Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations
Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder
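If only the decoder is needed, it can be loaded by first specifying the correct config, or accessed through the .decoder attribute of the composite model. A sketch of both options, checkpoint name assumed:

```python
from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration

# Option 1: get the decoder config and pass it to from_pretrained
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", config=decoder_config)

# Option 2: load the composite model, then keep only the decoder
decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder
```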
Tips:
MusicGen is trained on the 32kHz checkpoint of EnCodec. You should ensure you use a compatible version of the EnCodec model.
Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable do_sample in the call to MusicgenForConditionalGeneration.generate().
( vocab_size = 2048, max_position_embeddings = 2048, num_hidden_layers = 24, ffn_dim = 4096, num_attention_heads = 16, layerdrop = 0.0, use_cache = True, activation_function = 'gelu', hidden_size = 1024, dropout = 0.1, attention_dropout = 0.0, activation_dropout = 0.0, initializer_factor = 0.02, scale_embedding = False, num_codebooks = 4, pad_token_id = 2048, bos_token_id = 2048, eos_token_id = None, tie_word_embeddings = False, **kwargs )
Parameters
vocab_size (int, optional, defaults to 2048) — Vocabulary size of the MusicgenDecoder model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling MusicgenDecoder.
hidden_size (int, optional, defaults to 1024) — Dimensionality of the layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) — Number of decoder layers.
num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer block.
ffn_dim (int, optional, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer block.
activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the decoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically, set this to something large just in case (e.g., 512 or 1024 or 2048).
initializer_factor (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_embedding (bool, optional, defaults to False) — Scale embeddings by dividing by sqrt(hidden_size).
use_cache (bool, optional, defaults to True) — Whether the model should return the last key/values attentions (not used by all models).
num_codebooks (int, optional, defaults to 4) — The number of parallel codebooks forwarded to the model.
tie_word_embeddings (bool, optional, defaults to False) — Whether input and output word embeddings should be tied.
( **kwargs )
Parameters
kwargs (optional) — Dictionary of keyword arguments. Notably:
Example:
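A sketch of building the composite config from its three sub-configs with from_sub_models_config, then initializing a model from it (random weights):

```python
from transformers import (
    EncodecConfig,
    MusicgenConfig,
    MusicgenDecoderConfig,
    MusicgenForConditionalGeneration,
    T5Config,
)

# initializing text encoder, audio encoder, and decoder configurations
text_encoder_config = T5Config()
audio_encoder_config = EncodecConfig()
decoder_config = MusicgenDecoderConfig()

configuration = MusicgenConfig.from_sub_models_config(
    text_encoder_config, audio_encoder_config, decoder_config
)

# initializing a MusicgenForConditionalGeneration (with random weights) from this configuration
model = MusicgenForConditionalGeneration(configuration)

# accessing the model configuration
configuration = model.config
```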
from_sub_models_config
Returns
An instance of a MusicgenConfig configuration object.
( feature_extractor, tokenizer )
Parameters
Constructs a MusicGen processor which wraps an EnCodec feature extractor and a T5 tokenizer into a single processor class.
batch_decode
( *args, **kwargs )
decode
( *args, **kwargs )
( config: MusicgenDecoderConfig )
Parameters
The bare Musicgen decoder model outputting raw hidden-states without any specific head on top.
forward
( input_ids: LongTensor = None, attention_mask: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, cross_attn_head_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_codebooks, sequence_length)) — Indices of input sequence tokens in the vocabulary, corresponding to the sequence of audio codes.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (torch.LongTensor of shape (batch_size, encoder_sequence_length), optional) — Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing cross-attention on hidden heads. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
( config: MusicgenDecoderConfig )
Parameters
The MusicGen decoder model with a language modelling head on top.
forward
( input_ids: LongTensor = None, attention_mask: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, cross_attn_head_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_codebooks, sequence_length)) — Indices of input sequence tokens in the vocabulary, corresponding to the sequence of audio codes.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (torch.LongTensor of shape (batch_size, encoder_sequence_length), optional) — Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing cross-attention on hidden heads. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
( config: typing.Optional[transformers.models.musicgen.configuration_musicgen.MusicgenConfig] = None, text_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None, audio_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None, decoder: typing.Optional[transformers.models.musicgen.modeling_musicgen.MusicgenForCausalLM] = None )
Parameters
The composite MusicGen model with a text encoder, audio encoder and Musicgen decoder, for music generation tasks with one or both of text and audio prompts.
forward
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
decoder_input_ids (torch.LongTensor of shape (batch_size * num_codebooks, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary, corresponding to the sequence of audio codes.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
Returns
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
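A sketch of a single forward pass through the composite model; the decoder is primed with one pad token per codebook, following the input_ids conventions above (prompt text and checkpoint name are assumptions):

```python
import torch
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth"],
    padding=True,
    return_tensors="pt",
)

# prime the decoder with a start sequence of pad tokens, one row per codebook
pad_token_id = model.generation_config.pad_token_id
decoder_input_ids = (
    torch.ones((inputs.input_ids.shape[0] * model.decoder.num_codebooks, 1), dtype=torch.long)
    * pad_token_id
)

# logits have shape (batch_size * num_codebooks, sequence_length, vocab_size)
logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits
```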
The model can generate an audio sample conditioned on a text prompt through use of the MusicgenProcessor to pre-process the inputs (see the text-conditional generation example above).
The same MusicgenProcessor can be used to pre-process an audio prompt that is used for audio continuation. In the following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command below:
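Assuming a standard pip environment, the audio extra pulls in the audio decoding dependencies:

```
pip install --upgrade pip
pip install datasets[audio]
```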
For batched audio-prompted generation, the generated audio_values can be post-processed to remove padding by using the MusicgenProcessor class (see the batched post-processing example above).
Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class MusicgenForCausalLM, or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class MusicgenForConditionalGeneration. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first specifying the correct config, or be accessed through the .decoder attribute of the composite model (see the decoder-loading example above).
Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can be combined with the frozen text encoder and audio encoder/decoder to recover the composite model.
layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
This is the configuration class to store the configuration of a MusicgenDecoder. It is used to instantiate a MusicGen decoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MusicGen architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
text_encoder (PretrainedConfig, optional) — An instance of a configuration object that defines the text encoder config.
audio_encoder (PretrainedConfig, optional) — An instance of a configuration object that defines the audio encoder config.
decoder (MusicgenDecoderConfig, optional) — An instance of a configuration object that defines the decoder config.
This is the configuration class to store the configuration of a MusicgenModel. It is used to instantiate a MusicGen model according to the specified arguments, defining the text encoder, audio encoder and MusicGen decoder configs.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
( text_encoder_config: PretrainedConfig, audio_encoder_config: PretrainedConfig, decoder_config: MusicgenDecoderConfig, **kwargs ) → MusicgenConfig
Instantiate a MusicgenConfig (or a derived class) from text encoder, audio encoder and decoder configurations.
feature_extractor (EncodecFeatureExtractor) — An instance of EncodecFeatureExtractor. The feature extractor is a required input.
tokenizer (T5Tokenizer) — An instance of T5Tokenizer. The tokenizer is a required input.
MusicgenProcessor offers all the functionalities of EncodecFeatureExtractor and T5Tokenizer. See __call__() and decode() for more information.
This method is used to decode either batches of audio outputs from the MusicGen model, or batches of token ids from the tokenizer. In the case of decoding token ids, this method forwards all its arguments to T5Tokenizer’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to T5Tokenizer’s decode(). Please refer to the docstring of this method for more information.
config — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Musicgen model was proposed in Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Indices can be obtained by encoding an audio prompt with an audio encoder model to predict audio codes, such as with the EncodecModel. See EncodecModel.encode() for details.
The input_ids will automatically be converted from shape (batch_size * num_codebooks, target_sequence_length) to (batch_size, num_codebooks, target_sequence_length) in the forward pass. If you obtain audio codes from an audio encoding model, such as EncodecModel, ensure that the number of frames is equal to 1, and that you reshape the audio codes from (frames, batch_size, num_codebooks, target_sequence_length) to (batch_size * num_codebooks, target_sequence_length) prior to passing them as input_ids.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The forward method overrides the __call__ special method.
config — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Musicgen model was proposed in Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Indices can be obtained by encoding an audio prompt with an audio encoder model to predict audio codes, such as with the EncodecModel. See EncodecModel.encode() for details.
The input_ids will automatically be converted from shape (batch_size * num_codebooks, target_sequence_length) to (batch_size, num_codebooks, target_sequence_length) in the forward pass. If you obtain audio codes from an audio encoding model, such as EncodecModel, ensure that the number of frames is equal to 1, and that you reshape the audio codes from (frames, batch_size, num_codebooks, target_sequence_length) to (batch_size * num_codebooks, target_sequence_length) prior to passing them as input_ids.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns: Seq2SeqLMOutput or tuple(torch.FloatTensor): a Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MusicgenConfig) and inputs.
The forward method overrides the __call__ special method.
config — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Musicgen model was proposed in Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an encoder-decoder transformer trained on the task of conditional music generation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.BoolTensor] = None, input_values: typing.Optional[torch.FloatTensor] = None, padding_mask: typing.Optional[torch.BoolTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.BoolTensor] = None, encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs ) → Seq2SeqLMOutput or tuple(torch.FloatTensor)
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
Indices can be obtained by encoding an audio prompt with an audio encoder model to predict audio codes, such as with the EncodecModel. See EncodecModel.encode() for details.
The decoder_input_ids will automatically be converted from shape (batch_size * num_codebooks, target_sequence_length) to (batch_size, num_codebooks, target_sequence_length) in the forward pass. If you obtain audio codes from an audio encoding model, such as EncodecModel, ensure that the number of frames is equal to 1, and that you reshape the audio codes from (frames, batch_size, num_codebooks, target_sequence_length) to (batch_size * num_codebooks, target_sequence_length) prior to passing them as decoder_input_ids.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Seq2SeqLMOutput or tuple(torch.FloatTensor)
A Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MusicgenConfig) and inputs.
The forward method overrides the __call__ special method.