Vision Encoder Decoder Models
Overview
The VisionEncoderDecoderModel can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (e.g. ViT, BEiT, DeiT, Swin) and any pretrained language model as the decoder (e.g. RoBERTa, GPT2, BERT, DistilBERT).
The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
After such a VisionEncoderDecoderModel has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples below for more information).
An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to TrOCR, which is an instance of VisionEncoderDecoderModel.
Randomly initializing VisionEncoderDecoderModel from model configurations.
VisionEncoderDecoderModel can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default ViTModel configuration for the encoder and the default BertForCausalLM configuration for the decoder.
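A minimal sketch of this random initialization, using the default ViT and BERT configurations named above (all weights are randomly initialized):

```python
from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

# default (randomly initialized) encoder and decoder configurations
config_encoder = ViTConfig()
config_decoder = BertConfig()

# combine them into a single VisionEncoderDecoderConfig and build the model from it
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = VisionEncoderDecoderModel(config=config)
```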
Initializing VisionEncoderDecoderModel from a pretrained encoder and a pretrained decoder.
VisionEncoderDecoderModel can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, e.g. Swin, can serve as the encoder, and that pretrained auto-encoding models, e.g. BERT, pretrained causal language models, e.g. GPT2, as well as the pretrained decoder part of sequence-to-sequence models, e.g. the decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing VisionEncoderDecoderModel from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post. To do so, the VisionEncoderDecoderModel class provides a VisionEncoderDecoderModel.from_encoder_decoder_pretrained() method.
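A minimal sketch of warm-starting from two pretrained checkpoints; the Swin and BERT checkpoint ids below are illustrative choices:

```python
from transformers import VisionEncoderDecoderModel

# couple a pretrained Swin encoder with a pretrained BERT decoder;
# the decoder's cross-attention layers are randomly initialized and need fine-tuning
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
)
```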
Loading an existing VisionEncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the VisionEncoderDecoderModel class, VisionEncoderDecoderModel provides the from_pretrained(...) method just like any other model architecture in Transformers.
To perform inference, one uses the generate method, which allows one to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
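A minimal inference sketch, assuming a fine-tuned image-captioning checkpoint; the checkpoint id and image URL are illustrative:

```python
import requests
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

# load a fine-tuned captioning model together with its tokenizer and image processor
model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

# preprocess an image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(image, return_tensors="pt").pixel_values

# autoregressively generate the caption (greedy decoding by default)
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```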
Loading a PyTorch checkpoint into TFVisionEncoderDecoderModel.
TFVisionEncoderDecoderModel.from_pretrained() currently doesn't support initializing the model from a PyTorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is:
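One possible shape of that workaround, sketched with an illustrative PyTorch-only checkpoint id: save the encoder and decoder separately, then rebuild the TensorFlow model from the two parts.

```python
from transformers import TFVisionEncoderDecoderModel, VisionEncoderDecoderModel

# load the PyTorch checkpoint and split it into its encoder and decoder parts
_model = VisionEncoderDecoderModel.from_pretrained("ydshieh/vit-gpt2-coco-en")
_model.encoder.save_pretrained("./encoder")
_model.decoder.save_pretrained("./decoder")

# rebuild a TensorFlow model, converting each part from PyTorch weights
model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
)
# copy the original config so generation-related attributes are preserved
model.config = _model.config
```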
Training
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: pixel_values (which are the images) and labels (which are the input_ids of the encoded target sequence).
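A minimal training sketch; the checkpoint ids, dataset and caption text are illustrative:

```python
from datasets import load_dataset
from transformers import BertTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "bert-base-uncased"
)

# the decoder start and pad tokens must be set before a loss can be computed
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

image = load_dataset("huggingface/cats-image")["test"]["image"][0]
pixel_values = image_processor(image, return_tensors="pt").pixel_values
labels = tokenizer("an image of two cats chilling on a couch", return_tensors="pt").input_ids

# the forward pass creates decoder_input_ids automatically by shifting the labels
loss = model(pixel_values=pixel_values, labels=labels).loss
```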
This model was contributed by nielsr. This model’s TensorFlow and Flax versions were contributed by ydshieh.
VisionEncoderDecoderConfig
class transformers.VisionEncoderDecoderConfig
( **kwargs )
Parameters
kwargs (optional) — Dictionary of keyword arguments. Notably:
encoder (PretrainedConfig, optional) — An instance of a configuration object that defines the encoder config.
decoder (PretrainedConfig, optional) — An instance of a configuration object that defines the decoder config.
VisionEncoderDecoderConfig is the configuration class to store the configuration of a VisionEncoderDecoderModel. It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
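A minimal sketch of working with the configuration class (the save path is illustrative):

```python
from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

# build a combined config from a ViT-style encoder config and a BERT-style decoder config
config_encoder = ViTConfig()
config_decoder = BertConfig()
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

# initialize a model (with random weights) from that configuration
model = VisionEncoderDecoderModel(config=config)

# the encoder and decoder configs remain accessible on the model config
config_decoder = model.config.decoder  # is_decoder and add_cross_attention were set automatically

# saving the model also saves its configuration, which can be reloaded on its own
model.save_pretrained("my-model")
encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model")
model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```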
from_encoder_decoder_configs
( encoder_config: PretrainedConfig, decoder_config: PretrainedConfig, **kwargs ) → VisionEncoderDecoderConfig
Returns
An instance of a configuration object
Instantiate a VisionEncoderDecoderConfig (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
VisionEncoderDecoderModel
class transformers.VisionEncoderDecoderModel
( config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None, encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None, decoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None )
Parameters
config (VisionEncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the from_pretrained() function and the decoder is loaded via the from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
VisionEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one as decoder, when created with the AutoModel.from_pretrained() class method for the encoder and the AutoModelForCausalLM.from_pretrained() class method for the decoder.
forward
( pixel_values: typing.Optional[torch.FloatTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.BoolTensor] = None, encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use AutoImageProcessor). See ViTImageProcessor.__call__() for details.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). For training, decoder_input_ids are automatically created by the model by shifting the labels to the right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
encoder_outputs (tuple(torch.FloatTensor), optional) — This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model's internal embedding lookup matrix.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple.
kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: without a prefix, which will be input as **encoder_kwargs for the encoder forward function; with a decoder_ prefix, which will be input as **decoder_kwargs for the decoder forward function.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VisionEncoderDecoderConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The VisionEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
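A sketch of calling forward for training and generate for inference, using a TrOCR checkpoint as an example (the checkpoint id, image URL and target text are illustrative):

```python
import requests
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

processor = AutoProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# an example handwriting image
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# token ids the model needs in order to build decoder_input_ids from the labels
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id

# training: pass pixel_values and labels, read the loss from the output
pixel_values = processor(image, return_tensors="pt").pixel_values
labels = processor.tokenizer("hello world", return_tensors="pt").input_ids
outputs = model(pixel_values=pixel_values, labels=labels)
loss = outputs.loss

# inference: autoregressive generation
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```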
from_encoder_decoder_pretrained
( encoder_pretrained_model_name_or_path: str = None, decoder_pretrained_model_name_or_path: str = None, *model_args, **kwargs )
Parameters
encoder_pretrained_model_name_or_path (str, optional) — Information necessary to initiate the image encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on boincai.com. An example is google/vit-base-patch16-224-in21k.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
decoder_pretrained_model_name_or_path (str, optional, defaults to None) — Information necessary to initiate the text decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on boincai.com. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (remaining positional arguments, optional) — All remaining positional arguments will be passed to the underlying model's __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with model.train().
Example:
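A minimal sketch of from_encoder_decoder_pretrained followed by saving and reloading (the local paths are illustrative):

```python
from transformers import VisionEncoderDecoderModel

# initialize a ViT-BERT model from a pretrained ViT and a pretrained BERT checkpoint;
# the decoder's cross-attention layers are randomly initialized
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "bert-base-uncased"
)
# after fine-tuning, save and reload like any other model
model.save_pretrained("./vit-bert")
model = VisionEncoderDecoderModel.from_pretrained("./vit-bert")
```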
TFVisionEncoderDecoderModel
class transformers.TFVisionEncoderDecoderModel
( *args**kwargs )
Parameters
config (VisionEncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the from_pretrained() function and the decoder is loaded via the from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TFVisionEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one of the base model classes as decoder when created with the from_pretrained() class method for the encoder and from_pretrained() class method for the decoder.
call
( pixel_values: np.ndarray | tf.Tensor | None = None, decoder_input_ids: np.ndarray | tf.Tensor | None = None, decoder_attention_mask: np.ndarray | tf.Tensor | None = None, encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None, labels: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False, **kwargs ) → transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using the vision model's image processor. For example, using AutoImageProcessor. See ViTImageProcessor.__call__() for details.
decoder_input_ids (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). Provide for sequence to sequence training to the decoder.
decoder_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
encoder_outputs (tuple(tuple(tf.Tensor)), optional) — This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(tf.Tensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model's internal embedding lookup matrix.
labels (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: without a prefix, which will be input as **encoder_kwargs for the encoder forward function; with a decoder_ prefix, which will be input as **decoder_kwargs for the decoder forward function.
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VisionEncoderDecoderConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFVisionEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
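A sketch of the TensorFlow call for a forward pass, loss computation and generation; the checkpoint ids, image URL and caption text are illustrative:

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, TFVisionEncoderDecoderModel

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")

# couple a pretrained ViT encoder with a pretrained GPT2 decoder
model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(images=image, return_tensors="tf").pixel_values

# forward pass / training: use the caption ids both as decoder inputs and as labels
decoder_input_ids = decoder_tokenizer("two cats sleeping on a couch", return_tensors="tf").input_ids
outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids, labels=decoder_input_ids)
loss, logits = outputs.loss, outputs.logits

# generation
generated = model.generate(pixel_values, decoder_start_token_id=model.config.decoder.bos_token_id)
```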
from_encoder_decoder_pretrained
( encoder_pretrained_model_name_or_path: str = None, decoder_pretrained_model_name_or_path: str = None, *model_args, **kwargs )
Parameters
encoder_pretrained_model_name_or_path (str, optional) — Information necessary to initiate the encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on boincai.com. An example is google/vit-base-patch16-224-in21k.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a pytorch index checkpoint file (e.g., ./pt_model/). In this case, encoder_from_pt should be set to True.
decoder_pretrained_model_name_or_path (str, optional, defaults to None) — Information necessary to initiate the decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on boincai.com. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a pytorch checkpoint file (e.g., ./pt_model/). In this case, decoder_from_pt should be set to True.
model_args (remaining positional arguments, optional) — All remaining positional arguments will be passed to the underlying model's __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:
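A minimal sketch of the TensorFlow variant of from_encoder_decoder_pretrained (the local paths are illustrative):

```python
from transformers import TFVisionEncoderDecoderModel

# initialize a ViT-BERT model from a pretrained ViT and a pretrained BERT checkpoint;
# the decoder's cross-attention layers are randomly initialized
model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "bert-base-uncased"
)
# after fine-tuning, save and reload like any other model
model.save_pretrained("./vit-bert")
model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert")
```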
FlaxVisionEncoderDecoderModel
class transformers.FlaxVisionEncoderDecoderModel
( config: VisionEncoderDecoderConfig, input_shape: typing.Optional[typing.Tuple] = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
config (VisionEncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the from_pretrained() function and the decoder is loaded via the from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan and Aliaksei Severyn.
Additionally, in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.
After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
FlaxVisionEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base vision model classes of the library as encoder module and another one as decoder module, when created with the FlaxAutoModel.from_pretrained() class method for the encoder and the FlaxAutoModelForCausalLM.from_pretrained() class method for the decoder.
__call__
( pixel_values: Array, decoder_input_ids: typing.Optional[jax.Array] = None, decoder_attention_mask: typing.Optional[jax.Array] = None, decoder_position_ids: typing.Optional[jax.Array] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, train: bool = False, params: dict = None, dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (jnp.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using the vision model's image processor. For example, using AutoImageProcessor. See ViTImageProcessor.__call__() for details.
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
decoder_position_ids (jnp.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.decoder.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — If set to True, the model will return a ~utils.FlaxSeq2SeqLMOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VisionEncoderDecoderConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxVisionEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
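A sketch of captioning with the Flax model; the checkpoint ids and image URL are illustrative:

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, FlaxVisionEncoderDecoderModel

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# couple a pretrained ViT encoder with a pretrained GPT2 decoder;
# the decoder's cross-attention layers are randomly initialized
model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(images=image, return_tensors="np").pixel_values

# use GPT2's eos token as both pad and eos so generation can terminate cleanly
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.pad_token_id = model.config.eos_token_id

# beam-search generation
sequences = model.generate(pixel_values, num_beams=4, max_length=12).sequences
captions = tokenizer.batch_decode(sequences, skip_special_tokens=True)
```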
from_encoder_decoder_pretrained
( encoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None, decoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None, *model_args, **kwargs )
Parameters
encoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional) — Information necessary to initiate the encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on boincai.com. An example is google/vit-base-patch16-224-in21k.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
decoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional, defaults to None) — Information necessary to initiate the decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on boincai.com. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
model_args (remaining positional arguments, optional) — All remaining positional arguments will be passed to the underlying model's __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:
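A minimal sketch of the Flax variant of from_encoder_decoder_pretrained (the local paths are illustrative):

```python
from transformers import FlaxVisionEncoderDecoderModel

# initialize a ViT-GPT2 model from a pretrained ViT and a pretrained GPT2 checkpoint;
# the decoder's cross-attention layers are randomly initialized
model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
# after fine-tuning, save and reload like any other model
model.save_pretrained("./vit-gpt2")
model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2")
```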