FLAVA
Overview
The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela, and was accepted at CVPR 2022.
The paper aims to create a single unified foundation model that can work across vision, language, and multimodal vision-and-language tasks.
The abstract from the paper is the following:
State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a "foundation", that targets all modalities at once – a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities.
This model was contributed by aps. The original code can be found here.
FlavaConfig
class transformers.FlavaConfig
( image_config: typing.Dict[str, typing.Any] = None, text_config: typing.Dict[str, typing.Any] = None, multimodal_config: typing.Dict[str, typing.Any] = None, image_codebook_config: typing.Dict[str, typing.Any] = None, hidden_size: int = 768, layer_norm_eps: float = 1e-12, projection_dim: int = 768, init_codebook: bool = True, logit_scale_init_value: float = 2.6592, initializer_range: float = 0.02, ce_ignore_index: int = -100, mim_weight: float = 1.0, mlm_weight: float = 1.0, global_contrastive_weight: float = 1.0, itm_weight: float = 1.0, mmm_image_weight: float = 1.0, mmm_text_weight: float = 1.0, global_backprop_contrastive: bool = True, skip_unmasked_multimodal_encoder: bool = True, return_loss: bool = True, **kwargs )
Parameters
text_config (dict, optional) – Dictionary of configuration options used to initialize FlavaTextConfig.
image_config (dict, optional) – Dictionary of configuration options used to initialize FlavaImageConfig.
multimodal_config (dict, optional) – Dictionary of configuration options used to initialize FlavaMultimodalConfig.
hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.
layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.
projection_dim (int, optional, defaults to 768) – Dimensionality of the text and image projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) – The initial value of the logit_scale parameter. The default is used as per the original FLAVA/CLIP implementation.
initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
ce_ignore_index (int, optional, defaults to -100) – Cross entropy index to ignore.
mim_weight (float, optional, defaults to 1.0) – Weight to be assigned to the MIM (Masked Image Modeling) unimodal loss.
mlm_weight (float, optional, defaults to 1.0) – Weight to be assigned to the MLM (Masked Language Modeling) unimodal loss.
global_contrastive_weight (float, optional, defaults to 1.0) – Weight to be assigned to the global contrastive cross-alignment loss.
itm_weight (float, optional, defaults to 1.0) – Weight to be assigned to the image-text matching multimodal loss.
mmm_image_weight (float, optional, defaults to 1.0) – Weight to be assigned to the image part of the MMM loss.
mmm_text_weight (float, optional, defaults to 1.0) – Weight to be assigned to the text part of the MMM loss.
global_backprop_contrastive (bool, optional, defaults to True) – Whether to use global backpropagation through all workers in the contrastive loss.
skip_unmasked_multimodal_encoder (bool, optional, defaults to True) – Whether to skip running the unmasked multimodal encoder whose outputs are not used by the FLAVA losses.
return_loss (bool, optional, defaults to True) – Whether to return the loss or not.
kwargs (optional) – Dictionary of keyword arguments.
FlavaConfig is the configuration class to store the configuration of a FlavaModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the text model, image model, image codebook and multimodal model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import FlavaConfig, FlavaModel, FlavaForPreTraining
>>> # Initializing a FlavaConfig with the facebook/flava-full style configuration
>>> configuration = FlavaConfig()
>>> # Initializing a FlavaModel and a FlavaForPreTraining model (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaModel(configuration)
>>> model_pre = FlavaForPreTraining(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> configuration_pre = model_pre.config
from_configs
( image_config: FlavaImageConfig, text_config: FlavaTextConfig, multimodal_config: FlavaMultimodalConfig, image_codebook_config: FlavaImageCodebookConfig, **kwargs ) → FlavaConfig
Returns
An instance of a configuration object
Instantiate a FlavaConfig (or a derived class) from a FLAVA text model configuration, a FLAVA image model configuration, a FLAVA multimodal model configuration and a FLAVA image codebook model configuration.
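For example, the composite configuration can be assembled from individually customized sub-configurations. A minimal sketch using default sub-configurations:
>>> from transformers import (
...     FlavaConfig,
...     FlavaImageConfig,
...     FlavaTextConfig,
...     FlavaMultimodalConfig,
...     FlavaImageCodebookConfig,
... )

>>> # Build the four sub-configurations (here with their default values)
>>> image_config = FlavaImageConfig()
>>> text_config = FlavaTextConfig()
>>> multimodal_config = FlavaMultimodalConfig()
>>> image_codebook_config = FlavaImageCodebookConfig()

>>> # Combine them into a single FlavaConfig
>>> configuration = FlavaConfig.from_configs(
...     image_config, text_config, multimodal_config, image_codebook_config
... )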
FlavaTextConfig
class transformers.FlavaTextConfig
( vocab_size: int = 30522, type_vocab_size: int = 2, max_position_embeddings: int = 512, position_embedding_type: str = 'absolute', hidden_size: int = 768, num_hidden_layers: int = 12, num_attention_heads: int = 12, intermediate_size: int = 3072, hidden_act: str = 'gelu', hidden_dropout_prob: float = 0.0, attention_probs_dropout_prob: float = 0.0, initializer_range: float = 0.02, layer_norm_eps: float = 1e-12, pad_token_id: int = 0, qkv_bias: bool = True, **kwargs )
Parameters
vocab_size (int, optional, defaults to 30522) – Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FlavaTextModel.
type_vocab_size (int, optional, defaults to 2) – The vocabulary size of the token_type_ids passed when calling FlavaTextModel. Note that even though the text encoder allows a token_type_ids value of 2, only 1 is used for text-only pretraining and fine-tuning, similar to RoBERTa.
max_position_embeddings (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). For VL, the max_length passed to the model is 77.
position_embedding_type (str, optional, defaults to "absolute") – Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) – Whether to add a bias to the queries, keys and values.
This is the configuration class to store the configuration of a FlavaTextModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import FlavaTextConfig, FlavaTextModel
>>> # Initializing a FlavaTextConfig with the facebook/flava-full style configuration
>>> configuration = FlavaTextConfig()
>>> # Initializing a FlavaTextModel (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
FlavaImageConfig
class transformers.FlavaImageConfig
( hidden_size: int = 768, num_hidden_layers: int = 12, num_attention_heads: int = 12, intermediate_size: int = 3072, hidden_act: int = 'gelu', hidden_dropout_prob: float = 0.0, attention_probs_dropout_prob: float = 0.0, initializer_range: float = 0.02, layer_norm_eps: float = 1e-12, image_size: int = 224, patch_size: int = 16, num_channels: int = 3, qkv_bias: bool = True, mask_token: bool = True, vocab_size: int = 8192, **kwargs )
Parameters
hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) – The size (resolution) of each image.
patch_size (int, optional, defaults to 16) – The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) – The number of input channels.
qkv_bias (bool, optional, defaults to True) – Whether to add a bias to the queries, keys and values.
mask_token (bool, optional, defaults to True) – Whether to use a mask token or not. Used in the MIM (Masked Image Modeling) loss for FLAVA.
vocab_size (int, optional, defaults to 8192) – Vocabulary size of the FlavaImageCodebook used in conjunction with FlavaImageModel for the MIM (Masked Image Modeling) loss for FLAVA.
This is the configuration class to store the configuration of a FlavaImageModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import FlavaImageConfig, FlavaImageModel
>>> # Initializing a FlavaImageConfig with the facebook/flava-full style configuration
>>> configuration = FlavaImageConfig()
>>> # Initializing a FlavaImageModel (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaImageModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
FlavaMultimodalConfig
class transformers.FlavaMultimodalConfig
( hidden_size: int = 768, num_hidden_layers: int = 6, num_attention_heads: int = 12, intermediate_size: int = 3072, hidden_act: int = 'gelu', hidden_dropout_prob: int = 0.0, attention_probs_dropout_prob: int = 0.0, initializer_range: float = 0.02, layer_norm_eps: float = 1e-12, qkv_bias: bool = True, use_cls_token: bool = True, **kwargs )
Parameters
hidden_size (int, optional, defaults to 768) – Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 6) – Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) – Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) – The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) – Whether to add a bias to the queries, keys and values.
use_cls_token (bool, optional, defaults to True) – Whether to use an extra CLS token for multimodal settings. Usually needed by the FLAVA model.
This is the configuration class to store the configuration of a FlavaMultimodalModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import FlavaMultimodalConfig, FlavaMultimodalModel
>>> # Initializing a FlavaMultimodalConfig with the facebook/flava-full style configuration
>>> configuration = FlavaMultimodalConfig()
>>> # Initializing a FlavaMultimodalModel (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaMultimodalModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
FlavaImageCodebookConfig
class transformers.FlavaImageCodebookConfig
( num_groups: int = 4, input_channels: int = 3, num_blocks_per_group: int = 2, hidden_size: int = 256, vocab_size: int = 8192, freeze: int = True, initializer_range: float = 0.02, **kwargs )
FlavaProcessor
class transformers.FlavaProcessor
( image_processor = None, tokenizer = None, **kwargs )
Parameters
image_processor (FlavaImageProcessor) – The image processor is a required input.
tokenizer (BertTokenizerFast) – The tokenizer is a required input.
Constructs a FLAVA processor which wraps a FLAVA image processor and a FLAVA tokenizer into a single processor.
FlavaProcessor offers all the functionalities of FlavaImageProcessor and BertTokenizerFast. See __call__() and decode() for more information.
batch_decode
( *args, **kwargs )
This method forwards all its arguments to BertTokenizerFast's batch_decode(). Please refer to the docstring of this method for more information.
decode
( *args, **kwargs )
This method forwards all its arguments to BertTokenizerFast's decode(). Please refer to the docstring of this method for more information.
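A short usage sketch of the processor; the exact set of returned keys depends on the arguments passed (for instance, the image mask and codebook pixels are only returned when requested):
>>> import requests
>>> from PIL import Image
>>> from transformers import FlavaProcessor

>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Tokenize the text and preprocess the image in a single call
>>> inputs = processor(
...     text=["a photo of a cat"], images=image, return_tensors="pt", padding=True
... )
>>> # inputs typically contains input_ids, token_type_ids, attention_mask and pixel_values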
FlavaFeatureExtractor
class transformers.FlavaFeatureExtractor
( *args, **kwargs )
FlavaImageProcessor
class transformers.FlavaImageProcessor
( do_resize: bool = Truesize: typing.Dict[str, int] = Noneresample: Resampling = <Resampling.BICUBIC: 3>do_center_crop: bool = Truecrop_size: typing.Dict[str, int] = Nonedo_rescale: bool = Truerescale_factor: typing.Union[int, float] = 0.00392156862745098do_normalize: bool = Trueimage_mean: typing.Union[float, typing.Iterable[float], NoneType] = Noneimage_std: typing.Union[float, typing.Iterable[float], NoneType] = Nonereturn_image_mask: bool = Falseinput_size_patches: int = 14total_mask_patches: int = 75mask_group_min_patches: int = 16mask_group_max_patches: typing.Optional[int] = Nonemask_group_min_aspect_ratio: float = 0.3mask_group_max_aspect_ratio: typing.Optional[float] = Nonereturn_codebook_pixels: bool = Falsecodebook_do_resize: bool = Truecodebook_size: bool = Nonecodebook_resample: int = <Resampling.LANCZOS: 1>codebook_do_center_crop: bool = Truecodebook_crop_size: int = Nonecodebook_do_rescale: bool = Truecodebook_rescale_factor: typing.Union[int, float] = 0.00392156862745098codebook_do_map_pixels: bool = Truecodebook_do_normalize: bool = Truecodebook_image_mean: typing.Union[float, typing.Iterable[float], NoneType] = Nonecodebook_image_std: typing.Union[float, typing.Iterable[float], NoneType] = None**kwargs )
Parameters
do_resize (
bool, optional, defaults toTrue) β Whether to resize the imageβs (height, width) dimensions to the specifiedsize. Can be overridden by thedo_resizeparameter inpreprocess.size (
Dict[str, int], optional, defaults to {"height": 224, "width": 224}) – Size of the image after resizing. Can be overridden by the size parameter in preprocess.resample (
PILImageResampling, optional, defaults toPILImageResampling.BICUBIC) β Resampling filter to use if resizing the image. Can be overridden by theresampleparameter inpreprocess.do_center_crop (
bool, optional, defaults toTrue) β Whether to center crop the images. Can be overridden by thedo_center_cropparameter inpreprocess.crop_size (
Dict[str, int], optional, defaults to {"height": 224, "width": 224}) – Size of the image after the center crop (crop_size["height"], crop_size["width"]). Can be overridden by the crop_size parameter in preprocess.do_rescale (
bool, optional, defaults toTrue) β Whether to rescale the image by the specified scalerescale_factor. Can be overridden by thedo_rescaleparameter inpreprocess.rescale_factor (
intorfloat, optional, defaults to1/255) β Scale factor to use if rescaling the image. Can be overridden by therescale_factorparameter inpreprocess.do_normalize (
bool, optional, defaults toTrue) β Whether to normalize the image. Can be overridden by thedo_normalizeparameter inpreprocess.image_mean (
floatorList[float], optional, defaults toIMAGENET_STANDARD_MEAN) β Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by theimage_meanparameter in thepreprocessmethod.image_std (
floatorList[float], optional, defaults toIMAGENET_STANDARD_STD) β Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by theimage_stdparameter in thepreprocessmethod.return_image_mask (
bool, optional, defaults toFalse) β Whether to return the image mask. Can be overridden by thereturn_image_maskparameter inpreprocess.input_size_patches (
int, optional, defaults to 14) β Number of patches in the image in height and width direction. 14x14 = 196 total patches. Can be overridden by theinput_size_patchesparameter inpreprocess.total_mask_patches (
int, optional, defaults to 75) β Total number of patches that should be masked. Can be overridden by thetotal_mask_patchesparameter inpreprocess.mask_group_min_patches (
int, optional, defaults to 16) β Minimum number of patches that should be masked. Can be overridden by themask_group_min_patchesparameter inpreprocess.mask_group_max_patches (
int, optional) β Maximum number of patches that should be masked. Can be overridden by themask_group_max_patchesparameter inpreprocess.mask_group_min_aspect_ratio (
float, optional, defaults to 0.3) β Minimum aspect ratio of the mask window. Can be overridden by themask_group_min_aspect_ratioparameter inpreprocess.mask_group_max_aspect_ratio (
float, optional) β Maximum aspect ratio of the mask window. Can be overridden by themask_group_max_aspect_ratioparameter inpreprocess.codebook_do_resize (
bool, optional, defaults toTrue) β Whether to resize the input for codebook to a certain. Can be overridden by thecodebook_do_resizeparameter inpreprocess.codebook_size.codebook_size (
Dict[str, int], optional, defaults to {"height": 224, "width": 224}) – Resize the input for codebook to the given size. Can be overridden by the codebook_size parameter in preprocess.codebook_resample (
PILImageResampling, optional, defaults toPILImageResampling.LANCZOS) β Resampling filter to use if resizing the codebook image. Can be overridden by thecodebook_resampleparameter inpreprocess.codebook_do_center_crop (
bool, optional, defaults toTrue) β Whether to crop the input for codebook at the center. If the input size is smaller thancodebook_crop_sizealong any edge, the image is padded with 0βs and then center cropped. Can be overridden by thecodebook_do_center_cropparameter inpreprocess.codebook_crop_size (
Dict[str, int], optional, defaults to {"height": 224, "width": 224}) – Desired output size for codebook input when applying center-cropping. Can be overridden by the codebook_crop_size parameter in preprocess.codebook_do_rescale (
bool, optional, defaults toTrue) β Whether to rescale the input for codebook by the specified scalecodebook_rescale_factor. Can be overridden by thecodebook_do_rescaleparameter inpreprocess.codebook_rescale_factor (
intorfloat, optional, defaults to1/255) β Defines the scale factor to use if rescaling the codebook image. Can be overridden by thecodebook_rescale_factorparameter inpreprocess.codebook_do_map_pixels (
bool, optional, defaults toTrue) β Whether to map the pixel values of the codebook input to (1 - 2e)x + e. Can be overridden by thecodebook_do_map_pixelsparameter inpreprocess.codebook_do_normalize (
bool, optional, defaults toTrue) β Whether or not to normalize the input for codebook withcodebook_image_meanandcodebook_image_std. Can be overridden by thecodebook_do_normalizeparameter inpreprocess.codebook_image_mean (
Optional[Union[float, Iterable[float]]], optional, defaults to[0, 0, 0]) β The sequence of means for each channel, to be used when normalizing images for codebook. Can be overridden by thecodebook_image_meanparameter inpreprocess.codebook_image_std (
Optional[Union[float, Iterable[float]]], optional, defaults to[0.5, 0.5, 0.5]) β The sequence of standard deviations for each channel, to be used when normalizing images for codebook. Can be overridden by thecodebook_image_stdparameter inpreprocess.
Constructs a Flava image processor.
preprocess
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]do_resize: typing.Optional[bool] = Nonesize: typing.Dict[str, int] = Noneresample: Resampling = Nonedo_center_crop: typing.Optional[bool] = Nonecrop_size: typing.Union[typing.Dict[str, int], NoneType] = Nonedo_rescale: typing.Optional[bool] = Nonerescale_factor: typing.Optional[float] = Nonedo_normalize: typing.Optional[bool] = Noneimage_mean: typing.Union[float, typing.List[float], NoneType] = Noneimage_std: typing.Union[float, typing.List[float], NoneType] = Nonereturn_image_mask: typing.Optional[bool] = Noneinput_size_patches: typing.Optional[int] = Nonetotal_mask_patches: typing.Optional[int] = Nonemask_group_min_patches: typing.Optional[int] = Nonemask_group_max_patches: typing.Optional[int] = Nonemask_group_min_aspect_ratio: typing.Optional[float] = Nonemask_group_max_aspect_ratio: typing.Optional[float] = Nonereturn_codebook_pixels: typing.Optional[bool] = Nonecodebook_do_resize: typing.Optional[bool] = Nonecodebook_size: typing.Union[typing.Dict[str, int], NoneType] = Nonecodebook_resample: typing.Optional[int] = Nonecodebook_do_center_crop: typing.Optional[bool] = Nonecodebook_crop_size: typing.Union[typing.Dict[str, int], NoneType] = Nonecodebook_do_rescale: typing.Optional[bool] = Nonecodebook_rescale_factor: typing.Optional[float] = Nonecodebook_do_map_pixels: typing.Optional[bool] = Nonecodebook_do_normalize: typing.Optional[bool] = Nonecodebook_image_mean: typing.Optional[typing.Iterable[float]] = Nonecodebook_image_std: typing.Optional[typing.Iterable[float]] = Nonereturn_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = Nonedata_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None**kwargs )
Parameters
images (
ImageInput) β Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, setdo_rescale=False.do_resize (
bool, optional, defaults toself.do_resize) β Whether to resize the image.size (
Dict[str, int], optional, defaults toself.size) β Size of the image.resample (
int, optional, defaults toself.resample) β Resampling filter to use if resizing the image. This can be one of the enumPILImageResampling, Only has an effect ifdo_resizeis set toTrue.do_center_crop (
bool, optional, defaults toself.do_center_crop) β Whether to center crop the image.crop_size (
Dict[str, int], optional, defaults toself.crop_size) β Size of the center crop. Only has an effect ifdo_center_cropis set toTrue.do_rescale (
bool, optional, defaults toself.do_rescale) β Whether to rescale the image values between [0 - 1].rescale_factor (
float, optional, defaults toself.rescale_factor) β Rescale factor to rescale the image by ifdo_rescaleis set toTrue.do_normalize (
bool, optional, defaults toself.do_normalize) β Whether to normalize the image.image_mean (
floatorList[float], optional, defaults toself.image_mean) β Image mean.image_std (
floatorList[float], optional, defaults toself.image_std) β Image standard deviation.return_image_mask (
bool, optional, defaults toself.return_image_mask) β Whether to return the image mask.input_size_patches (
int, optional, defaults toself.input_size_patches) β Size of the patches to extract from the image.total_mask_patches (
int, optional, defaults toself.total_mask_patches) β Total number of patches to extract from the image.mask_group_min_patches (
int, optional, defaults toself.mask_group_min_patches) β Minimum number of patches to extract from the image.mask_group_max_patches (
int, optional, defaults toself.mask_group_max_patches) β Maximum number of patches to extract from the image.mask_group_min_aspect_ratio (
float, optional, defaults toself.mask_group_min_aspect_ratio) β Minimum aspect ratio of the patches to extract from the image.mask_group_max_aspect_ratio (
float, optional, defaults toself.mask_group_max_aspect_ratio) β Maximum aspect ratio of the patches to extract from the image.return_codebook_pixels (
bool, optional, defaults toself.return_codebook_pixels) β Whether to return the codebook pixels.codebook_do_resize (
bool, optional, defaults toself.codebook_do_resize) β Whether to resize the codebook pixels.codebook_size (
Dict[str, int], optional, defaults toself.codebook_size) β Size of the codebook pixels.codebook_resample (
int, optional, defaults toself.codebook_resample) β Resampling filter to use if resizing the codebook pixels. This can be one of the enumPILImageResampling, Only has an effect ifcodebook_do_resizeis set toTrue.codebook_do_center_crop (
bool, optional, defaults toself.codebook_do_center_crop) β Whether to center crop the codebook pixels.codebook_crop_size (
Dict[str, int], optional, defaults toself.codebook_crop_size) β Size of the center crop of the codebook pixels. Only has an effect ifcodebook_do_center_cropis set toTrue.codebook_do_rescale (
bool, optional, defaults toself.codebook_do_rescale) β Whether to rescale the codebook pixels values between [0 - 1].codebook_rescale_factor (
float, optional, defaults toself.codebook_rescale_factor) β Rescale factor to rescale the codebook pixels by ifcodebook_do_rescaleis set toTrue.codebook_do_map_pixels (
bool, optional, defaults toself.codebook_do_map_pixels) β Whether to map the codebook pixels values.codebook_do_normalize (
bool, optional, defaults toself.codebook_do_normalize) β Whether to normalize the codebook pixels.codebook_image_mean (
floatorList[float], optional, defaults toself.codebook_image_mean) β Codebook pixels mean to normalize the codebook pixels by ifcodebook_do_normalizeis set toTrue.codebook_image_std (
floatorList[float], optional, defaults toself.codebook_image_std) β Codebook pixels standard deviation to normalize the codebook pixels by ifcodebook_do_normalizeis set toTrue.return_tensors (
strorTensorType, optional) β The type of tensors to return. Can be one of:Unset: Return a list of
np.ndarray.TensorType.TENSORFLOWor'tf': Return a batch of typetf.Tensor.TensorType.PYTORCHor'pt': Return a batch of typetorch.Tensor.TensorType.NUMPYor'np': Return a batch of typenp.ndarray.TensorType.JAXor'jax': Return a batch of typejax.numpy.ndarray.
data_format (
ChannelDimensionorstr, optional, defaults toChannelDimension.FIRST) β The channel dimension format for the output image. Can be one of:ChannelDimension.FIRST: image in (num_channels, height, width) format.ChannelDimension.LAST: image in (height, width, num_channels) format.
input_data_format (
ChannelDimensionorstr, optional) β The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:"channels_first"orChannelDimension.FIRST: image in (num_channels, height, width) format."channels_last"orChannelDimension.LAST: image in (height, width, num_channels) format."none"orChannelDimension.NONE: image in (height, width) format.
Preprocess an image or batch of images.
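A minimal sketch of running the image processor (calling the instance forwards to preprocess). The pretraining-specific outputs, the image mask and the codebook pixels, are only returned when the corresponding flags are set:
>>> import requests
>>> from PIL import Image
>>> from transformers import FlavaImageProcessor

>>> image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Request the random patch mask and the separately processed codebook pixels
>>> # used by FLAVA's masked image modeling losses
>>> outputs = image_processor(
...     image,
...     return_image_mask=True,
...     return_codebook_pixels=True,
...     return_tensors="pt",
... )
>>> # outputs["pixel_values"] is (1, 3, 224, 224) with the default resize/crop settings;
>>> # the mask and codebook pixels are returned as additional entries in the same batch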
FlavaForPreTraining
class transformers.FlavaForPreTraining
( config: FlavaConfig, image_codebook: typing.Optional[torch.nn.modules.module.Module] = None )
Parameters
config (FlavaConfig) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
image_codebook (nn.Module, optional) – If passed, the image codebook will be set to this. Otherwise, it will be initialized using the image_codebook_config defined in the config.
The FLAVA model for pretraining which outputs losses, embeddings, logits and transformer outputs.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = Noneinput_ids_masked: typing.Optional[torch.LongTensor] = Nonepixel_values: typing.Optional[torch.FloatTensor] = Nonecodebook_pixel_values: typing.Optional[torch.FloatTensor] = Noneattention_mask: typing.Optional[torch.Tensor] = Nonetoken_type_ids: typing.Optional[torch.Tensor] = Nonebool_masked_pos: typing.Optional[torch.Tensor] = Noneposition_ids: typing.Optional[torch.LongTensor] = Noneimage_attention_mask: typing.Optional[torch.Tensor] = Noneskip_unmasked_multimodal_encoder: bool = Nonemlm_labels: typing.Optional[torch.Tensor] = Nonemim_labels: typing.Optional[torch.Tensor] = Noneitm_labels: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: bool = Truereturn_dict: typing.Optional[bool] = Nonereturn_loss: typing.Optional[bool] = None ) β transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids_masked (
torch.LongTensorof shape(batch_size, text_seq_len)) β Indices of input sequence tokens in the vocabulary. These ones are the masked version of the original task to be used with MLM. Indices can be obtained using AutoTokenizer along withDataCollatorForMaskedLanguageModeling. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?input_ids (
torch.LongTensorof shape(batch_size, text_seq_len)) β Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?token_type_ids (
torch.LongTensorof shape(batch_size, text_seq_len), optional) β Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]:0 corresponds to a sentence A token,
1 corresponds to a sentence B token. What are token type IDs?
pixel_values (
torch.FloatTensorof shape(batch_size, num_channels, height, width)) β Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details.bool_masked_pos (
torch.BoolTensorof shape(batch_size, image_num_patches)) β Boolean masked positions. Indicates which patches are masked (1) and which arenβt (0).interpolate_pos_encoding (
bool, optional) β Whether to interpolate the pre-trained position encodings.image_attention_mask (
torch.FloatTensorof shape(batch_size, image_num_patches), optional) β Mask to avoid performing attention on padding token indices specifically for images. Mask values selected in[0, 1]:1 for tokens that are not masked,
0 for tokens that are masked. What are attention masks?
skip_unmasked_multimodal_encoder (bool, optional) β Skip any calculations for multimodal encoder for unmasked inputs. FLAVA pretraining doesnβt need unmasked multimodal embeddings or outputs as of now.
mlm_labels (
torch.LongTensorof shape(batch_size, text_seq_len), optional) β Labels for computing the left-to-right language and multimodal masked modeling loss (next word prediction). Indices should be in[-100, 0, ..., text_config.vocab_size - 1](seeinput_idsdocstring). Tokens with indices set to-100are ignored (masked), the loss is only computed for the tokens with labels in[0, ..., text_config.vocab_size - 1].mim_labels (
torch.LongTensorof shape(batch_size, image_num_patches), optional) β Labels for computing the image and multimodal masked modeling loss. Indices should be in[-100, 0, ..., image_config.vocab_size - 1]. Tokens with indices set to-100are ignored (masked), the loss is only computed for the tokens with labels in[0, ..., image_config.vocab_size - 1]. If not passed, they are generated automatically using the image codebook assigned to the model. By default, it uses FlavaImageCodebook. See FlavaImageCodebook to understand how to generate mim_labels.itm_labels (
torch.LongTensorof shape(batch_size, 1), optional) β Labels for computing the image-text matching loss. 0 means the pairs donβt match and 1 means they match. The pairs with 0 will be skipped for calculation of MMM and global contrastive losses as well.return_loss (
bool, optional, default to None) β Whether to return calculated loss or not.attention_mask (
torch.FloatTensorof shape(batch_size, text_seq_len), optional) β Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:1 for tokens that are not masked,
0 for tokens that are masked. What are attention masks?
head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional) β Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (
bool, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail.output_hidden_states (
bool, optional) β Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail.return_dict (
bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs.
loss (
torch.FloatTensor, optional, returned whenreturn_lossis True) β Total loss calculated for this model.loss_info (
FlavaLosses) β Detailed info for FLAVA Pretraining losses. CheckFlavaLossesclass description for the information on the keys.image_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned whenpixel_valuesare present) β The image embeddings which are basically the pooled output of FlavaImageModel.image_output (
BaseModelOutputWithPooling, optional, returned whenpixel_valuesare present) β The output of the FlavaImageModel.text_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned wheninput_idsare present) β The text embeddings which are basically the pooled output of FlavaTextModel.text_output (
BaseModelOutputWithPooling, optional, returned wheninput_idsare present) β The output of the FlavaTextModel.multimodal_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned wheninput_idsandpixel_valuesare present andskip_unmasked_multimodal_encoderisNoneorFalse) β The multimodal embeddings which are basically the pooled output of FlavaTextModel.multimodal_output (
BaseModelOutputWithPooling, returned wheninput_idsandpixel_valuesare present andskip_unmasked_multimodal_encoderisNoneorFalse) β The output of the FlavaMultimodalModel.image_masked_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned whenpixel_valuesare present) β The image embeddings which are basically the pooled output of FlavaImageModel. Usesbool_masked_posto create masked images.image_masked_output (
BaseModelOutputWithPooling, optional, returned whenpixel_valuesare present) β The output of the FlavaImageModel. Usesbool_masked_posto create masked images.text_masked_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned wheninput_ids_maskedare present) β The text embeddings which are basically the pooled output of FlavaTextModel.text_masked_output (
BaseModelOutputWithPooling, optional, returned wheninput_ids_maskedare present) β The output of the FlavaTextModel.multimodal_masked_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned wheninput_idsandpixel_valuesare present) β The multimodal embeddings which are basically the pooled output of FlavaTextModel.multimodal_masked_output (
BaseModelOutputWithPooling, returned wheninput_ids_maskedandpixel_valuesare present) β The output of the FlavaMultimodalModel.mim_logits (
torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values are present and input_ids_masked are not) – The logits for the MIM unimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is returned when bool_masked_pos has some of the patches masked.mlm_logits (
torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when input_ids_masked are present and pixel_values are not) – The logits for the MLM unimodal loss. The flattened output is returned when input_ids_masked has some of the tokens masked.itm_logits (
torch.FloatTensor of shape (batch_size, 2), optional, returned when input_ids_masked and pixel_values are present) – The logits for the ITM loss. Note that the ITM loss is calculated on masked pairs in FLAVA.mmm_image_logits (
torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values and input_ids_masked are present) – The logits for the image part of the MMM multimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is returned when bool_masked_pos has some of the patches masked.mmm_text_logits (
torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when pixel_values and input_ids_masked are present) – The logits for the text part of the MMM multimodal loss. The flattened output is returned when input_ids_masked has some of the tokens masked.contrastive_logits_per_image (
torch.FloatTensor of shape (image_batch_size, text_batch_size)) – The scaled dot product scores between image_embeddings and text_embeddings but passed through FLAVA's image_projection and text_projection layers respectively. This represents the image-text similarity scores. This is calculated on unmasked images and texts.contrastive_logits_per_text (
torch.FloatTensor of shape (text_batch_size, image_batch_size)) – The scaled dot product scores between text_embeddings and image_embeddings but passed through FLAVA's text_projection and image_projection layers respectively. This is calculated on unmasked images and texts.
The FlavaForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
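A hedged sketch of a pretraining forward pass. It assumes the processor forwards return_image_mask and return_codebook_pixels to the image processor, and for brevity it reuses the unmasked input_ids as input_ids_masked; in a real pipeline the masked ids and mlm_labels would come from a masking collator:
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import FlavaProcessor, FlavaForPreTraining

>>> model = FlavaForPreTraining.from_pretrained("facebook/flava-full")
>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(
...     text=["a photo of a cat"],
...     images=image,
...     return_image_mask=True,  # bool_masked_pos for masked image modeling
...     return_codebook_pixels=True,  # codebook pixels from which mim_labels are derived
...     padding=True,
...     return_tensors="pt",
... )
>>> # For illustration only: reuse the unmasked ids; a real pipeline would mask tokens here
>>> inputs["input_ids_masked"] = inputs["input_ids"].clone()

>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # outputs is a FlavaForPreTrainingOutput with loss_info, logits and per-modality embeddings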
FlavaModel
class transformers.FlavaModel
( config: FlavaConfig )
Parameters
config (FlavaConfig) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = Nonepixel_values: typing.Optional[torch.FloatTensor] = Noneattention_mask: typing.Optional[torch.Tensor] = Nonetoken_type_ids: typing.Optional[torch.Tensor] = Nonebool_masked_pos: typing.Optional[torch.Tensor] = Noneposition_ids: typing.Optional[torch.LongTensor] = Noneimage_attention_mask: typing.Optional[torch.Tensor] = Noneskip_multimodal_encoder: typing.Optional[bool] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: bool = Truereturn_dict: typing.Optional[bool] = None ) β transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (
torch.FloatTensorof shape(batch_size, num_channels, height, width)) β Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details.bool_masked_pos (
torch.BoolTensorof shape(batch_size, image_num_patches)) β Boolean masked positions. Indicates which patches are masked (1) and which arenβt (0).interpolate_pos_encoding (
bool, optional) β Whether to interpolate the pre-trained position encodings.input_ids (
torch.LongTensorof shape(batch_size, image_num_patches + text_seq_len)) β Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?token_type_ids (
torch.LongTensorof shape(batch_size, image_num_patches + text_seq_len), optional) β Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]:0 corresponds to a sentence A token,
1 corresponds to a sentence B token. What are token type IDs?
attention_mask (
torch.FloatTensorof shape(batch_size, image_num_patches + text_seq_len), optional) β Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:1 for tokens that are not masked,
0 for tokens that are masked. What are attention masks?
head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional) β Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (
bool, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail.output_hidden_states (
bool, optional) β Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail.return_dict (
bool, optional) β Whether or not to return a ModelOutput instead of a plain tuple.skip_multimodal_encoder (bool, optional) β Skip any calculations for multimodal encoder. Useful if multimodal encoding is not going to be used.
Returns
transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor)
A transformers.models.flava.modeling_flava.FlavaModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs.
image_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned whenpixel_valuesare present) β The image embeddings which are basically the pooled output of FlavaImageModel.image_output (
BaseModelOutputWithPooling, optional, returned whenpixel_valuesare present) β The output of the FlavaImageModel.text_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned wheninput_idsare present) β The text embeddings which are basically the pooled output of FlavaTextModel.text_output (
BaseModelOutputWithPooling, optional, returned wheninput_idsare present) β The output of the FlavaTextModel.multimodal_embeddings (
torch.FloatTensorof shape(batch_size, output_dim), optional, returned wheninput_idsandpixel_valuesare present andskip_multimodal_encoderisNoneorFalse) β The multimodal embeddings which are basically the pooled output of FlavaTextModel.multimodal_output (
BaseModelOutputWithPooling, returned wheninput_idsandpixel_valuesare present andskip_multimodal_encoderisNoneorFalse) β The output of the FlavaMultimodalModel.
The FlavaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, FlavaModel
>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = AutoProcessor.from_pretrained("facebook/flava-full")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.contrastive_logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
get_text_features
( input_ids: typing.Optional[torch.Tensor] = Noneattention_mask: typing.Optional[torch.Tensor] = Nonetoken_type_ids: typing.Optional[torch.Tensor] = Noneposition_ids: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = None )
Parameters
input_ids (
torch.LongTensorof shape(batch_size, text_seq_length)) β Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?token_type_ids (
torch.LongTensorof shape(batch_size, text_seq_length), optional) β Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]:0 corresponds to a sentence A token,
1 corresponds to a sentence B token. What are token type IDs?
attention_mask (
torch.FloatTensorof shape(batch_size, text_seq_length), optional) β Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:1 for tokens that are not masked,
0 for tokens that are masked. What are attention masks?
head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional) β Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (
bool, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail.output_hidden_states (
bool, optional) β Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail.return_dict (
bool, optional) β Whether or not to return a ModelOutput instead of a plain tuple.
The FlavaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
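A brief sketch of extracting text features with the text tower only (no image inputs are needed):
>>> import torch
>>> from transformers import AutoTokenizer, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/flava-full")

>>> inputs = tokenizer(
...     ["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     text_features = model.get_text_features(**inputs)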
get_image_features
( pixel_values: typing.Optional[torch.Tensor] = Nonebool_masked_pos: typing.Optional[torch.BoolTensor] = Noneinterpolate_pos_encoding: typing.Optional[bool] = Noneattention_mask: typing.Optional[torch.Tensor] = Nonehead_mask: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = None )
Parameters
pixel_values (
torch.FloatTensorof shape(batch_size, num_channels, height, width)) β Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details.bool_masked_pos (
torch.BoolTensorof shape(batch_size, image_num_patches)) β Boolean masked positions. Indicates which patches are masked (1) and which arenβt (0).interpolate_pos_encoding (
bool, optional) β Whether to interpolate the pre-trained position encodings.attention_mask (
torch.FloatTensorof shape(batch_size, image_num_patches), optional) β Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:1 for tokens that are not masked,
0 for tokens that are masked. What are attention masks?
head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional) β Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (
bool, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail.output_hidden_states (
bool, optional) β Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail.return_dict (
bool, optional) β Whether or not to return a ModelOutput instead of a plain tuple.
The FlavaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
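And the image-side counterpart, a brief sketch of extracting image features with the image tower only:
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoProcessor, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = AutoProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     image_features = model.get_image_features(**inputs)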
FlavaImageCodebook
class transformers.FlavaImageCodebook
( config: FlavaImageCodebookConfig, **kwargs: typing.Any )
Parameters
config (FlavaImageCodebookConfig) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
FLAVA's image codebook model, inspired by DALL-E's original encoder. It outputs raw hidden states and can be used to generate image tokens for an image based on DALL-E's vocabulary. Used to generate labels for MIM. Use get_codebook_indices to get image tokens for an image.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( pixel_values: FloatTensor )
get_codebook_indices
( pixel_values: Tensor )
get_codebook_probs
( pixel_values: Tensor )
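Example: a short sketch of turning an image into discrete codebook indices (the tokens used as MIM labels), using get_codebook_indices described above. It assumes the standalone facebook/flava-image-codebook checkpoint and that the image processor returns codebook_pixel_values when return_codebook_pixels=True:
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import FlavaImageProcessor, FlavaImageCodebook

>>> codebook = FlavaImageCodebook.from_pretrained("facebook/flava-image-codebook")
>>> image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # The codebook consumes the separately processed codebook pixels, not the regular pixel_values
>>> inputs = image_processor(image, return_codebook_pixels=True, return_tensors="pt")
>>> with torch.no_grad():
...     indices = codebook.get_codebook_indices(inputs["codebook_pixel_values"])
>>> # indices holds one discrete token id per spatial position of the codebook encoder output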
FlavaTextModel
class transformers.FlavaTextModel
( config: FlavaTextConfig, add_pooling_layer: bool = True )
Parameters
config (FlavaTextConfig) β Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Text Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = Noneattention_mask: typing.Optional[torch.Tensor] = Nonetoken_type_ids: typing.Optional[torch.Tensor] = Noneposition_ids: typing.Optional[torch.Tensor] = Nonehead_mask: typing.Optional[torch.Tensor] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = None ) β transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (
torch.LongTensorof shape(batch_size, text_seq_length)) β Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?token_type_ids (
torch.LongTensorof shape(batch_size, text_seq_length), optional) β Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]:0 corresponds to a sentence A token,
1 corresponds to a sentence B token. What are token type IDs?
attention_mask (
torch.FloatTensorof shape(batch_size, text_seq_length), optional) β Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:1 for tokens that are not masked,
0 for tokens that are masked. What are attention masks?
head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional) β Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (
bool, optional) β Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail.output_hidden_states (
bool, optional) β Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail.return_dict (
bool, optional) β Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaTextConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g., for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlavaTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlavaTextModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/flava-full")
>>> model = FlavaTextModel.from_pretrained("facebook/flava-full")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
FlavaImageModel
class transformers.FlavaImageModel
( config: FlavaImageConfig, add_pooling_layer: bool = True )
Parameters
config (FlavaImageConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Image Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
( pixel_values: typing.Optional[torch.Tensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, interpolate_pos_encoding: typing.Optional[bool] = None, attention_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) – Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.__call__() for details.
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) – Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
interpolate_pos_encoding (bool, optional) – Whether to interpolate the pre-trained position encodings.
attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaImageConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g., for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlavaImageModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, FlavaImageModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/flava-full")
>>> model = FlavaImageModel.from_pretrained("facebook/flava-full")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 197, 768]
FlavaMultimodalModel
class transformers.FlavaMultimodalModel
( config: FlavaMultimodalConfig, add_pooling_layer = True )
Parameters
config (FlavaMultimodalConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Multimodal Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
( hidden_states: Tensor, attention_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
hidden_states (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len, hidden_size)) – The concatenated hidden states of unimodal encoders.
attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) – Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaMultimodalConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) – Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g., for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlavaMultimodalModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import FlavaMultimodalModel
>>> import torch

>>> model = FlavaMultimodalModel.from_pretrained("facebook/flava-full")

>>> # The multimodal encoder consumes the concatenated hidden states of the unimodal
>>> # image and text encoders; random states of a plausible shape are used here purely
>>> # for illustration (197 image patch states + 7 text token states, hidden size 768).
>>> hidden_states = torch.randn(1, 197 + 7, 768)

>>> with torch.no_grad():
...     outputs = model(hidden_states)
>>> last_hidden_states = outputs.last_hidden_state