Vision Text Dual Encoder
VisionTextDualEncoderModel can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (e.g. ViT, BEiT, DeiT) and any pretrained text autoencoding model as the text encoder (e.g. RoBERTa, BERT). Two projection layers are added on top of both the vision and text encoders to project the output embeddings to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text training, and can then be used for zero-shot vision tasks such as image classification or retrieval.
In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
class transformers.VisionTextDualEncoderConfig
( projection_dim = 512, logit_scale_init_value = 2.6592, **kwargs )
Parameters
text_config (dict) — Dictionary of configuration options that defines the text model config.
vision_config (dict) — Dictionary of configuration options that defines the vision model config.
projection_dim (int, optional, defaults to 512) — Dimensionality of the text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. The default is used as per the original CLIP implementation.
kwargs (optional) — Dictionary of keyword arguments.
VisionTextDualEncoderConfig is the configuration class to store the configuration of a VisionTextDualEncoderModel. It is used to instantiate the model according to the specified arguments, defining the text model and vision model configs.
Examples:
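A minimal sketch of how the configuration can be composed from separate vision and text sub-configs; the ViT/BERT sub-configurations and the "vit-bert" save directory below are illustrative choices, not prescribed values:

from transformers import (
    BertConfig,
    ViTConfig,
    VisionTextDualEncoderConfig,
    VisionTextDualEncoderModel,
)

# Compose a dual-encoder config from a ViT vision config and a BERT text config
vision_config = ViTConfig()
text_config = BertConfig()
config = VisionTextDualEncoderConfig.from_vision_text_configs(
    vision_config, text_config, projection_dim=512
)

# Initialize a model (with randomly initialized weights) from the composed config
model = VisionTextDualEncoderModel(config=config)

# The sub-configs stay accessible on the model's configuration
vision_config = model.config.vision_config
text_config = model.config.text_config

# Save the model (and its configuration), then reload the configuration
model.save_pretrained("vit-bert")
config = VisionTextDualEncoderConfig.from_pretrained("vit-bert")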
from_vision_text_configs
Returns
An instance of a configuration object
class transformers.VisionTextDualEncoderProcessor
( image_processor = None, tokenizer = None, **kwargs )
Parameters
Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor.
batch_decode
( *args, **kwargs )
decode
( *args, **kwargs )
class transformers.VisionTextDualEncoderModel
( config: typing.Optional[transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig] = None, vision_model: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None, text_model: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None )
Parameters
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
forward
( input_ids: typing.Optional[torch.LongTensor] = None, pixel_values: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, return_loss: typing.Optional[bool] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_clip.CLIPOutput
or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_loss (bool, optional) — Whether or not to return the contrastive loss.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
Returns
transformers.models.clip.modeling_clip.CLIPOutput
or tuple(torch.FloatTensor)
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
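A hedged sketch of the kind of contrastive-training usage this forward pass supports; the checkpoint names ("google/vit-base-patch16-224", "bert-base-uncased"), the image URL, and the captions are illustrative placeholders:

import requests
from PIL import Image
from transformers import (
    BertTokenizer,
    ViTImageProcessor,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Illustrative checkpoints: any compatible vision/text encoder pair should work
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# Placeholder image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

# return_loss=True adds the contrastive image-text loss to the output
outputs = model(**inputs, return_loss=True)
loss = outputs.loss
logits_per_image = outputs.logits_per_image  # image-text similarity scores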
class transformers.FlaxVisionTextDualEncoderModel
( config: VisionTextDualEncoderConfig, input_shape: typing.Optional[typing.Tuple] = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
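As a small, hedged sketch of the dtype argument, assuming a Flax dual-encoder checkpoint has already been saved to a local directory (the "./vit-bert" path is a placeholder):

import jax.numpy as jnp
from transformers import FlaxVisionTextDualEncoderModel

# Load a previously saved checkpoint and run the computation in float16 on GPU;
# the parameters themselves keep their stored dtype
model = FlaxVisionTextDualEncoderModel.from_pretrained("./vit-bert", dtype=jnp.float16)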
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
( input_ids, pixel_values, attention_mask = None, position_ids = None, token_type_ids = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput
or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
Returns
transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput
or tuple(jnp.ndarray)
logits_per_image (jnp.ndarray of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
logits_per_text (jnp.ndarray of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
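A hedged sketch of the equivalent Flax usage; the checkpoint names and the image URL below are illustrative placeholders:

import requests
from PIL import Image
from transformers import (
    BertTokenizer,
    FlaxVisionTextDualEncoderModel,
    ViTImageProcessor,
    VisionTextDualEncoderProcessor,
)

# Illustrative checkpoints for the vision and text encoders
model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="np",
    padding=True,
)
outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    pixel_values=inputs.pixel_values,
)
logits_per_image = outputs.logits_per_image  # image-text similarity scores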
class transformers.TFVisionTextDualEncoderModel
( *args, **kwargs )
Parameters
After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
call
( input_ids: tf.Tensor | None = None, pixel_values: tf.Tensor | None = None, attention_mask: tf.Tensor | None = None, position_ids: tf.Tensor | None = None, return_loss: Optional[bool] = None, token_type_ids: tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → transformers.models.clip.modeling_tf_clip.TFCLIPOutput
or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_loss (bool, optional) — Whether or not to return the contrastive loss.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
Returns
transformers.models.clip.modeling_tf_clip.TFCLIPOutput
or tuple(tf.Tensor)
loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
logits_per_text (tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
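A hedged sketch of the equivalent TensorFlow usage, assuming the TF class mirrors the PyTorch from_vision_text_pretrained API; the checkpoint names and the image URL are illustrative placeholders:

import requests
from PIL import Image
from transformers import (
    BertTokenizer,
    TFVisionTextDualEncoderModel,
    ViTImageProcessor,
    VisionTextDualEncoderProcessor,
)

# Illustrative checkpoints for the vision and text encoders
model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="tf",
    padding=True,
)

# return_loss=True adds the contrastive image-text loss to the output
outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    pixel_values=inputs.pixel_values,
    return_loss=True,
)
loss = outputs.loss
logits_per_image = outputs.logits_per_image  # image-text similarity scores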
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
( vision_config: PretrainedConfig, text_config: PretrainedConfig, **kwargs ) → VisionTextDualEncoderConfig
Instantiate a VisionTextDualEncoderConfig (or a derived class) from text model configuration and vision model configuration.
image_processor — The image processor is a required input.
tokenizer — The tokenizer is a required input.
VisionTextDualEncoderProcessor offers all the functionalities of the wrapped image processor and tokenizer. See the __call__() and decode() methods for more information.
This method forwards all its arguments to VisionTextDualEncoderTokenizer’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to VisionTextDualEncoderTokenizer’s decode(). Please refer to the docstring of this method for more information.
config (VisionTextDualEncoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the from_pretrained() method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.
In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use ViTImageProcessor). See ViTImageProcessor.__call__() for details.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.models.clip.modeling_clip.CLIPOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VisionTextDualEncoderConfig) and inputs.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of the text encoder.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of the vision encoder.
text_model_output (BaseModelOutputWithPooling) — The output of the text encoder.
vision_model_output (BaseModelOutputWithPooling) — The output of the vision encoder.
The forward method overrides the __call__ special method.
config (VisionTextDualEncoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the from_pretrained() method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.
In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use ViTImageProcessor). See ViTImageProcessor.__call__() for details.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VisionTextDualEncoderConfig) and inputs.
text_embeds (jnp.ndarray of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of the text encoder.
image_embeds (jnp.ndarray of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of the vision encoder.
text_model_output (FlaxBaseModelOutputWithPooling) — The output of the text encoder.
vision_model_output (FlaxBaseModelOutputWithPooling) — The output of the vision encoder.
The forward method overrides the __call__ special method.
config (VisionTextDualEncoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the from_pretrained() method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.
In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a Keras tf.keras.Model subclass. Use it as a regular Keras Model and refer to the TF documentation for all matters related to general usage and behavior.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use ViTImageProcessor). See ViTImageProcessor.__call__() for details.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.models.clip.modeling_tf_clip.TFCLIPOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VisionTextDualEncoderConfig) and inputs.
text_embeds (tf.Tensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of the text encoder.
image_embeds (tf.Tensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of the vision encoder.
text_model_output (~modeling_tf_utils.TFBaseModelOutputWithPooling) — The output of the text encoder.
vision_model_output (~modeling_tf_utils.TFBaseModelOutputWithPooling) — The output of the vision encoder.
The forward method overrides the __call__ special method.