InstructBLIP
The InstructBLIP model was proposed in InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi. InstructBLIP leverages the BLIP-2 architecture for visual instruction tuning.
The abstract from the paper is the following:
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.
Tips:
InstructBLIP uses the same architecture as BLIP-2 with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
InstructBLIP architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
( vision_config = None, qformer_config = None, text_config = None, num_query_tokens = 32, **kwargs )
Parameters
vision_config (dict, optional) — Dictionary of configuration options used to initialize InstructBlipVisionConfig.
qformer_config (dict, optional) — Dictionary of configuration options used to initialize InstructBlipQFormerConfig.
text_config (dict, optional) — Dictionary of configuration options used to initialize any PretrainedConfig.
num_query_tokens (int, optional, defaults to 32) — The number of query tokens passed through the Transformer.
kwargs (optional) — Dictionary of keyword arguments.
InstructBlipConfig is the configuration class to store the configuration of an InstructBlipForConditionalGeneration. It is used to instantiate an InstructBLIP model according to the specified arguments, defining the vision model, Q-Former model and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBLIP Salesforce/instruct-blip-flan-t5 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
from_vision_qformer_text_configs
( vision_config: InstructBlipVisionConfig, qformer_config: InstructBlipQFormerConfig, text_config: PretrainedConfig, **kwargs ) → InstructBlipConfig
Returns
An instance of a configuration object
Instantiate an InstructBlipConfig (or a derived class) from an InstructBLIP vision model, Q-Former and language model configurations.
( hidden_size = 1408, intermediate_size = 6144, num_hidden_layers = 39, num_attention_heads = 16, image_size = 224, patch_size = 14, hidden_act = 'gelu', layer_norm_eps = 1e-06, attention_dropout = 0.0, initializer_range = 1e-10, qkv_bias = True, **kwargs )
Parameters
hidden_size (int, optional, defaults to 1408) — Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 6144) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 39) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) — The size (resolution) of each image.
patch_size (int, optional, defaults to 14) — The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
layer_norm_eps (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 1e-10) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries and values in the self-attention layers.
This is the configuration class to store the configuration of an InstructBlipVisionModel. It is used to instantiate an InstructBLIP vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBLIP Salesforce/instruct-blip-flan-t5 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
( vocab_size = 30522, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1, max_position_embeddings = 512, initializer_range = 0.02, layer_norm_eps = 1e-12, pad_token_id = 0, position_embedding_type = 'absolute', cross_attention_frequency = 2, encoder_hidden_size = 1408, **kwargs )
Parameters
vocab_size (int, optional, defaults to 30522) — Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling the model.
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
cross_attention_frequency (int, optional, defaults to 2) — The frequency of adding cross-attention to the Transformer layers.
encoder_hidden_size (int, optional, defaults to 1408) — The hidden size of the hidden states for cross-attention.
This is the configuration class to store the configuration of an InstructBlipQFormerModel. It is used to instantiate an InstructBLIP Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBLIP Salesforce/instruct-blip-flan-t5 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Note that InstructBlipQFormerModel is very similar to BertLMHeadModel with interleaved cross-attention.
Examples:
( image_processor, tokenizer, qformer_tokenizer )
Parameters
image_processor (BlipImageProcessor) — An instance of BlipImageProcessor. The image processor is a required input.
tokenizer (AutoTokenizer) — An instance of PreTrainedTokenizer. The tokenizer is a required input.
qformer_tokenizer (AutoTokenizer) — An instance of PreTrainedTokenizer. The Q-Former tokenizer is a required input.
Constructs an InstructBLIP processor which wraps a BLIP image processor and a LLaMa/T5 tokenizer into a single processor.
InstructBlipProcessor offers all the functionalities of BlipImageProcessor and AutoTokenizer. See the docstring of __call__() and decode() for more information.
batch_decode
( *args, **kwargs )
This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please refer to the docstring of this method for more information.
decode
( *args, **kwargs )
This method forwards all its arguments to PreTrainedTokenizer’s decode(). Please refer to the docstring of this method for more information.
( config: InstructBlipVisionConfig )
forward
( pixel_values: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using InstructBlipProcessor. See InstructBlipProcessor.__call__() for details.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (InstructBlipVisionConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The InstructBlipVisionModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
( config: InstructBlipQFormerConfig )
Querying Transformer (Q-Former), used in InstructBLIP. Slightly modified from BLIP-2 as it also takes the instruction as input.
forward
( input_ids: LongTensor, attention_mask: typing.Optional[torch.FloatTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, query_embeds: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.FloatTensor] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers, with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
( config: InstructBlipConfig )
Parameters
config (InstructBlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
InstructBLIP Model for generating text given an image and an optional text prompt. The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model.
One can optionally pass input_ids to the model, which serve as a text prompt, to make the language model continue the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( pixel_values: FloatTensor, qformer_input_ids: FloatTensor, qformer_attention_mask: typing.Optional[torch.LongTensor] = None, input_ids: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.LongTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, labels: typing.Optional[torch.LongTensor] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.instructblip.modeling_instructblip.InstructBlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using InstructBlipProcessor. See InstructBlipProcessor.__call__() for details.
qformer_input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary of the Q-Former. Input tokens can optionally be provided to serve as text prompt, which the Q-Former model will encode. Indices can be obtained using InstructBlipProcessor. See InstructBlipProcessor.__call__() for details.
qformer_attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be provided to serve as text prompt, which the language model can continue. Indices can be obtained using InstructBlipProcessor. See InstructBlipProcessor.__call__() for details.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an encoder-decoder language model (like T5) is used. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are decoder input IDs?
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. Only relevant in case an encoder-decoder language model (like T5) is used.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.models.instructblip.modeling_instructblip.InstructBlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.instructblip.modeling_instructblip.InstructBlipForConditionalGenerationModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (InstructBlipVisionConfig) and inputs.
loss (torch.FloatTensor, optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Language modeling loss from the language model.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model.
vision_outputs (BaseModelOutputWithPooling) — Outputs of the vision encoder.
qformer_outputs (BaseModelOutputWithPoolingAndCrossAttentions) — Outputs of the Q-Former (Querying Transformer).
language_model_outputs (CausalLMOutputWithPast or Seq2SeqLMOutput) — Outputs of the language model.
The InstructBlipForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
generate
( pixel_values: FloatTensor, qformer_input_ids: typing.Optional[torch.LongTensor] = None, qformer_attention_mask: typing.Optional[torch.LongTensor] = None, input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.LongTensor] = None, **generate_kwargs ) → captions (list)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Input images to be processed.
qformer_input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The sequence used as a prompt to be fed to the Q-Former module.
qformer_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The sequence used as a prompt for the generation.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices.
Returns
captions (list)
A list of strings of length batch_size * num_captions.
Overrides the generate function to be able to use the model as a conditional generator.