FLAVA

Overview

The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela, and was accepted at CVPR 2022.

The paper aims at creating a single unified foundation model that can work across vision, language, and vision-and-language multimodal tasks.

The abstract from the paper is the following:

State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a “foundation”, that targets all modalities at once — a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities.

This model was contributed by one of the paper's authors. The original code can be found in the facebookresearch/multimodal repository.

FlavaConfig

class transformers.FlavaConfig

( image_config: typing.Dict[str, typing.Any] = None, text_config: typing.Dict[str, typing.Any] = None, multimodal_config: typing.Dict[str, typing.Any] = None, image_codebook_config: typing.Dict[str, typing.Any] = None, hidden_size: int = 768, layer_norm_eps: float = 1e-12, projection_dim: int = 768, init_codebook: bool = True, logit_scale_init_value: float = 2.6592, initializer_range: float = 0.02, ce_ignore_index: int = -100, mim_weight: float = 1.0, mlm_weight: float = 1.0, global_contrastive_weight: float = 1.0, itm_weight: float = 1.0, mmm_image_weight: float = 1.0, mmm_text_weight: float = 1.0, global_backprop_contrastive: bool = True, skip_unmasked_multimodal_encoder: bool = True, return_loss: bool = True, **kwargs )

Parameters

  • text_config (dict, optional) — Dictionary of configuration options used to initialize FlavaTextConfig.

  • image_config (dict, optional) — Dictionary of configuration options used to initialize FlavaImageConfig.

  • multimodal_config (dict, optional) — Dictionary of configuration options used to initialize FlavaMultimodalConfig.

  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.

  • layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.

  • projection_dim (int, optional, defaults to 768) — Dimensionality of the text and image projection layers.

  • logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. The default is used as per the original FLAVA/CLIP implementation.

  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • ce_ignore_index (int, optional, defaults to -100) — Cross entropy index to ignore.

  • mim_weight (float, optional, defaults to 1.0) — Weight to be assigned to MIM (Masked Image Modeling) unimodal loss

  • mlm_weight (float, optional, defaults to 1.0) — Weight to be assigned to MLM (Masked Language Modeling) unimodal loss

  • global_contrastive_weight (float, optional, defaults to 1.0) — Weight to be assigned to global contrastive cross-alignment loss.

  • itm_weight (float, optional, defaults to 1.0) — Weight to be assigned to image-text matching multimodal loss.

  • mmm_image_weight (float, optional, defaults to 1.0) — Weight to be assigned to MMM loss’s image part.

  • mmm_text_weight (float, optional, defaults to 1.0) — Weight to be assigned to MMM loss’s text part.

  • global_backprop_contrastive (bool, optional, defaults to True) — Whether to use global backpropagation through all workers in the contrastive loss.

  • skip_unmasked_multimodal_encoder (bool, optional, defaults to True) — Whether to skip running unmasked multimodal encoder whose outputs are not used by FLAVA losses.

  • return_loss (bool, optional, defaults to True) — Whether to return the calculated loss or not.

  • kwargs (optional) — Dictionary of keyword arguments.

Example:

>>> from transformers import FlavaConfig, FlavaModel, FlavaForPreTraining

>>> # Initializing a FlavaConfig with facebook/flava-full style configuration
>>> configuration = FlavaConfig()

>>> # Initializing a FlavaModel and FlavaForPreTraining model (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaModel(configuration)
>>> model_pre = FlavaForPreTraining(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
>>> configuration_pre = model_pre.config

from_configs

Instantiate a FlavaConfig (or a derived class) from the FLAVA image, text, multimodal and image codebook model configurations.

Returns

An instance of a configuration object
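
A minimal sketch of combining the component configurations shown on this page; the keyword names follow the FlavaConfig signature above, and the default component values are used for brevity:

>>> from transformers import (
...     FlavaConfig,
...     FlavaTextConfig,
...     FlavaImageConfig,
...     FlavaMultimodalConfig,
...     FlavaImageCodebookConfig,
... )

>>> # Build the component configurations (defaults shown on this page)
>>> image_config = FlavaImageConfig()
>>> text_config = FlavaTextConfig()
>>> multimodal_config = FlavaMultimodalConfig()
>>> image_codebook_config = FlavaImageCodebookConfig()

>>> # Combine them into a single FlavaConfig
>>> configuration = FlavaConfig.from_configs(
...     image_config=image_config,
...     text_config=text_config,
...     multimodal_config=multimodal_config,
...     image_codebook_config=image_codebook_config,
... )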

FlavaTextConfig

class transformers.FlavaTextConfig

( vocab_size: int = 30522, type_vocab_size: int = 2, max_position_embeddings: int = 512, position_embedding_type: str = 'absolute', hidden_size: int = 768, num_hidden_layers: int = 12, num_attention_heads: int = 12, intermediate_size: int = 3072, hidden_act: str = 'gelu', hidden_dropout_prob: float = 0.0, attention_probs_dropout_prob: float = 0.0, initializer_range: float = 0.02, layer_norm_eps: float = 1e-12, pad_token_id: int = 0, qkv_bias: bool = True, **kwargs )

Parameters

  • max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048). Note that for the vision-language setting, the max_length passed to the model is 77.

  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.

  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.

  • qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.

Example:

>>> from transformers import FlavaTextConfig, FlavaTextModel

>>> # Initializing a FlavaTextModel with facebook/flava-full style configuration
>>> configuration = FlavaTextConfig()

>>> # Initializing a FlavaTextModel (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

FlavaImageConfig

class transformers.FlavaImageConfig

( hidden_size: int = 768, num_hidden_layers: int = 12, num_attention_heads: int = 12, intermediate_size: int = 3072, hidden_act: int = 'gelu', hidden_dropout_prob: float = 0.0, attention_probs_dropout_prob: float = 0.0, initializer_range: float = 0.02, layer_norm_eps: float = 1e-12, image_size: int = 224, patch_size: int = 16, num_channels: int = 3, qkv_bias: bool = True, mask_token: bool = True, vocab_size: int = 8192, **kwargs )

Parameters

  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.

  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.

  • image_size (int, optional, defaults to 224) — The size (resolution) of each image.

  • patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.

  • num_channels (int, optional, defaults to 3) — The number of input channels.

  • qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.

  • mask_token (bool, optional, defaults to True) — Whether to use a mask token or not. Used in MIM (Masked Image Modeling) loss for FLAVA.

Example:

>>> from transformers import FlavaImageConfig, FlavaImageModel

>>> # Initializing a FlavaImageModel with facebook/flava-full style configuration
>>> configuration = FlavaImageConfig()

>>> # Initializing a FlavaImageModel (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaImageModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

FlavaMultimodalConfig

class transformers.FlavaMultimodalConfig

( hidden_size: int = 768, num_hidden_layers: int = 6, num_attention_heads: int = 12, intermediate_size: int = 3072, hidden_act: int = 'gelu', hidden_dropout_prob: int = 0.0, attention_probs_dropout_prob: int = 0.0, initializer_range: float = 0.02, layer_norm_eps: float = 1e-12, qkv_bias: bool = True, use_cls_token: bool = True, **kwargs )

Parameters

  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 6) — Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.

  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.

  • qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.

  • use_cls_token (bool, optional, defaults to True) — Whether to use an extra CLS token for multimodal settings. Usually needed by the FLAVA model.

Example:

>>> from transformers import FlavaMultimodalConfig, FlavaMultimodalModel

>>> # Initializing a FlavaMultimodalModel with facebook/flava-full style configuration
>>> configuration = FlavaMultimodalConfig()

>>> # Initializing a FlavaMultimodalModel (with random weights) from the facebook/flava-full style configuration
>>> model = FlavaMultimodalModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

FlavaImageCodebookConfig

class transformers.FlavaImageCodebookConfig

( num_groups: int = 4, input_channels: int = 3, num_blocks_per_group: int = 2, hidden_size: int = 256, vocab_size: int = 8192, freeze: int = True, initializer_range: float = 0.02, **kwargs )

FlavaProcessor

class transformers.FlavaProcessor

( image_processor = None, tokenizer = None, **kwargs )

Parameters

  • image_processor (FlavaImageProcessor) — The image processor is a required input.

  • tokenizer (BertTokenizerFast) — The tokenizer is a required input.

Constructs a FLAVA processor which wraps a FLAVA image processor and a FLAVA tokenizer into a single processor.

batch_decode

( *args, **kwargs )

This method forwards all its arguments to the underlying tokenizer's batch_decode method. Please refer to the docstring of that method for more information.

decode

( *args, **kwargs )

This method forwards all its arguments to the underlying tokenizer's decode method. Please refer to the docstring of that method for more information.
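
A minimal usage sketch, assuming the facebook/flava-full checkpoint used elsewhere on this page; the processor tokenizes the text and prepares the pixel values in a single call:

>>> from PIL import Image
>>> import requests
>>> from transformers import FlavaProcessor

>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Tokenized text and preprocessed image in one batch of model inputs
>>> inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
>>> print(sorted(inputs.keys()))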

FlavaFeatureExtractor

class transformers.FlavaFeatureExtractor

( *args, **kwargs )

FlavaImageProcessor

class transformers.FlavaImageProcessor

( do_resize: bool = True, size: typing.Dict[str, int] = None, resample: Resampling = <Resampling.BICUBIC: 3>, do_center_crop: bool = True, crop_size: typing.Dict[str, int] = None, do_rescale: bool = True, rescale_factor: typing.Union[int, float] = 0.00392156862745098, do_normalize: bool = True, image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None, image_std: typing.Union[float, typing.Iterable[float], NoneType] = None, return_image_mask: bool = False, input_size_patches: int = 14, total_mask_patches: int = 75, mask_group_min_patches: int = 16, mask_group_max_patches: typing.Optional[int] = None, mask_group_min_aspect_ratio: float = 0.3, mask_group_max_aspect_ratio: typing.Optional[float] = None, return_codebook_pixels: bool = False, codebook_do_resize: bool = True, codebook_size: bool = None, codebook_resample: int = <Resampling.LANCZOS: 1>, codebook_do_center_crop: bool = True, codebook_crop_size: int = None, codebook_do_rescale: bool = True, codebook_rescale_factor: typing.Union[int, float] = 0.00392156862745098, codebook_do_map_pixels: bool = True, codebook_do_normalize: bool = True, codebook_image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None, codebook_image_std: typing.Union[float, typing.Iterable[float], NoneType] = None, **kwargs )

Parameters

  • do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in preprocess.

  • size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Size of the image after resizing. Can be overridden by the size parameter in preprocess.

  • resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by the resample parameter in preprocess.

  • do_center_crop (bool, optional, defaults to True) — Whether to center crop the images. Can be overridden by the do_center_crop parameter in preprocess.

  • crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Size of image after the center crop (crop_size["height"], crop_size["width"]). Can be overridden by the crop_size parameter in preprocess.

  • do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in preprocess.

  • rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in preprocess.

  • do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in preprocess.

  • image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.

  • image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.

  • return_image_mask (bool, optional, defaults to False) — Whether to return the image mask. Can be overridden by the return_image_mask parameter in preprocess.

  • input_size_patches (int, optional, defaults to 14) — Number of patches in the image in height and width direction. 14x14 = 196 total patches. Can be overridden by the input_size_patches parameter in preprocess.

  • total_mask_patches (int, optional, defaults to 75) — Total number of patches that should be masked. Can be overridden by the total_mask_patches parameter in preprocess.

  • mask_group_min_patches (int, optional, defaults to 16) — Minimum number of patches that should be masked. Can be overridden by the mask_group_min_patches parameter in preprocess.

  • mask_group_max_patches (int, optional) — Maximum number of patches that should be masked. Can be overridden by the mask_group_max_patches parameter in preprocess.

  • mask_group_min_aspect_ratio (float, optional, defaults to 0.3) — Minimum aspect ratio of the mask window. Can be overridden by the mask_group_min_aspect_ratio parameter in preprocess.

  • mask_group_max_aspect_ratio (float, optional) — Maximum aspect ratio of the mask window. Can be overridden by the mask_group_max_aspect_ratio parameter in preprocess.

  • codebook_do_resize (bool, optional, defaults to True) — Whether to resize the input for codebook to a certain codebook_size. Can be overridden by the codebook_do_resize parameter in preprocess.

  • codebook_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Resize the input for codebook to the given size. Can be overridden by the codebook_size parameter in preprocess.

  • codebook_resample (PILImageResampling, optional, defaults to PILImageResampling.LANCZOS) — Resampling filter to use if resizing the codebook image. Can be overridden by the codebook_resample parameter in preprocess.

  • codebook_do_center_crop (bool, optional, defaults to True) — Whether to crop the input for codebook at the center. If the input size is smaller than codebook_crop_size along any edge, the image is padded with 0’s and then center cropped. Can be overridden by the codebook_do_center_crop parameter in preprocess.

  • codebook_crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Desired output size for codebook input when applying center-cropping. Can be overridden by the codebook_crop_size parameter in preprocess.

  • codebook_do_rescale (bool, optional, defaults to True) — Whether to rescale the input for codebook by the specified scale codebook_rescale_factor. Can be overridden by the codebook_do_rescale parameter in preprocess.

  • codebook_rescale_factor (int or float, optional, defaults to 1/255) — Defines the scale factor to use if rescaling the codebook image. Can be overridden by the codebook_rescale_factor parameter in preprocess.

  • codebook_do_map_pixels (bool, optional, defaults to True) — Whether to map the pixel values of the codebook input to (1 - 2e)x + e. Can be overridden by the codebook_do_map_pixels parameter in preprocess.

  • codebook_do_normalize (bool, optional, defaults to True) — Whether or not to normalize the input for codebook with codebook_image_mean and codebook_image_std. Can be overridden by the codebook_do_normalize parameter in preprocess.

  • codebook_image_mean (Optional[Union[float, Iterable[float]]], optional, defaults to [0, 0, 0]) — The sequence of means for each channel, to be used when normalizing images for codebook. Can be overridden by the codebook_image_mean parameter in preprocess.

  • codebook_image_std (Optional[Union[float, Iterable[float]]], optional, defaults to [0.5, 0.5, 0.5]) — The sequence of standard deviations for each channel, to be used when normalizing images for codebook. Can be overridden by the codebook_image_std parameter in preprocess.

Constructs a Flava image processor.

preprocess

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]], do_resize: typing.Optional[bool] = None, size: typing.Dict[str, int] = None, resample: Resampling = None, do_center_crop: typing.Optional[bool] = None, crop_size: typing.Union[typing.Dict[str, int], NoneType] = None, do_rescale: typing.Optional[bool] = None, rescale_factor: typing.Optional[float] = None, do_normalize: typing.Optional[bool] = None, image_mean: typing.Union[float, typing.List[float], NoneType] = None, image_std: typing.Union[float, typing.List[float], NoneType] = None, return_image_mask: typing.Optional[bool] = None, input_size_patches: typing.Optional[int] = None, total_mask_patches: typing.Optional[int] = None, mask_group_min_patches: typing.Optional[int] = None, mask_group_max_patches: typing.Optional[int] = None, mask_group_min_aspect_ratio: typing.Optional[float] = None, mask_group_max_aspect_ratio: typing.Optional[float] = None, return_codebook_pixels: typing.Optional[bool] = None, codebook_do_resize: typing.Optional[bool] = None, codebook_size: typing.Union[typing.Dict[str, int], NoneType] = None, codebook_resample: typing.Optional[int] = None, codebook_do_center_crop: typing.Optional[bool] = None, codebook_crop_size: typing.Union[typing.Dict[str, int], NoneType] = None, codebook_do_rescale: typing.Optional[bool] = None, codebook_rescale_factor: typing.Optional[float] = None, codebook_do_map_pixels: typing.Optional[bool] = None, codebook_do_normalize: typing.Optional[bool] = None, codebook_image_mean: typing.Optional[typing.Iterable[float]] = None, codebook_image_std: typing.Optional[typing.Iterable[float]] = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>, input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None, **kwargs )

Parameters

  • images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.

  • do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.

  • size (Dict[str, int], optional, defaults to self.size) — Size of the image.

  • resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.

  • do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image.

  • crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True.

  • do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1].

  • rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.

  • do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.

  • image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean.

  • image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation.

  • return_image_mask (bool, optional, defaults to self.return_image_mask) — Whether to return the image mask.

  • input_size_patches (int, optional, defaults to self.input_size_patches) — Size of the patches to extract from the image.

  • total_mask_patches (int, optional, defaults to self.total_mask_patches) — Total number of patches to extract from the image.

  • mask_group_min_patches (int, optional, defaults to self.mask_group_min_patches) — Minimum number of patches to extract from the image.

  • mask_group_max_patches (int, optional, defaults to self.mask_group_max_patches) — Maximum number of patches to extract from the image.

  • mask_group_min_aspect_ratio (float, optional, defaults to self.mask_group_min_aspect_ratio) — Minimum aspect ratio of the patches to extract from the image.

  • mask_group_max_aspect_ratio (float, optional, defaults to self.mask_group_max_aspect_ratio) — Maximum aspect ratio of the patches to extract from the image.

  • return_codebook_pixels (bool, optional, defaults to self.return_codebook_pixels) — Whether to return the codebook pixels.

  • codebook_do_resize (bool, optional, defaults to self.codebook_do_resize) — Whether to resize the codebook pixels.

  • codebook_size (Dict[str, int], optional, defaults to self.codebook_size) — Size of the codebook pixels.

  • codebook_resample (int, optional, defaults to self.codebook_resample) — Resampling filter to use if resizing the codebook pixels. This can be one of the enum PILImageResampling. Only has an effect if codebook_do_resize is set to True.

  • codebook_do_center_crop (bool, optional, defaults to self.codebook_do_center_crop) — Whether to center crop the codebook pixels.

  • codebook_crop_size (Dict[str, int], optional, defaults to self.codebook_crop_size) — Size of the center crop of the codebook pixels. Only has an effect if codebook_do_center_crop is set to True.

  • codebook_do_rescale (bool, optional, defaults to self.codebook_do_rescale) — Whether to rescale the codebook pixels values between [0 - 1].

  • codebook_rescale_factor (float, optional, defaults to self.codebook_rescale_factor) — Rescale factor to rescale the codebook pixels by if codebook_do_rescale is set to True.

  • codebook_do_map_pixels (bool, optional, defaults to self.codebook_do_map_pixels) — Whether to map the codebook pixels values.

  • codebook_do_normalize (bool, optional, defaults to self.codebook_do_normalize) — Whether to normalize the codebook pixels.

  • codebook_image_mean (float or List[float], optional, defaults to self.codebook_image_mean) — Codebook pixels mean to normalize the codebook pixels by if codebook_do_normalize is set to True.

  • codebook_image_std (float or List[float], optional, defaults to self.codebook_image_std) — Codebook pixels standard deviation to normalize the codebook pixels by if codebook_do_normalize is set to True.

  • return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:

    • Unset: Return a list of np.ndarray.

    • TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.

    • TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.

    • TensorType.NUMPY or 'np': Return a batch of type np.ndarray.

    • TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.

  • data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:

    • ChannelDimension.FIRST: image in (num_channels, height, width) format.

    • ChannelDimension.LAST: image in (height, width, num_channels) format.

  • input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:

    • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.

    • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.

    • "none" or ChannelDimension.NONE: image in (height, width) format.

Preprocess an image or batch of images.
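
A short sketch of calling the image processor directly, assuming the facebook/flava-full checkpoint; enabling return_image_mask and return_codebook_pixels additionally produces the bool_masked_pos mask and the codebook_pixel_values used for masked image modeling:

>>> from PIL import Image
>>> import requests
>>> from transformers import FlavaImageProcessor

>>> image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> outputs = image_processor(
...     image,
...     return_image_mask=True,       # adds bool_masked_pos
...     return_codebook_pixels=True,  # adds codebook_pixel_values
...     return_tensors="pt",
... )
>>> print(sorted(outputs.keys()))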

FlavaForPreTraining

class transformers.FlavaForPreTraining

( config: FlavaConfig, image_codebook: typing.Optional[torch.nn.modules.module.Module] = None )

Parameters

  • config (FlavaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

  • image_codebook (nn.Module, optional) — If passed, the image codebook will be set to this module. Otherwise, it will be initialized using the image_codebook_config defined in the config.

The FLAVA model for pretraining which outputs losses, embeddings, logits and transformer outputs.

forward

( input_ids: typing.Optional[torch.LongTensor] = None, input_ids_masked: typing.Optional[torch.LongTensor] = None, pixel_values: typing.Optional[torch.FloatTensor] = None, codebook_pixel_values: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, bool_masked_pos: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, image_attention_mask: typing.Optional[torch.Tensor] = None, skip_unmasked_multimodal_encoder: bool = None, mlm_labels: typing.Optional[torch.Tensor] = None, mim_labels: typing.Optional[torch.Tensor] = None, itm_labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: bool = True, return_dict: typing.Optional[bool] = None, return_loss: typing.Optional[bool] = None ) → transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor)

Parameters

  • token_type_ids (torch.LongTensor of shape (batch_size, text_seq_len), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).

  • interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings.

  • image_attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices specifically for images. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • skip_unmasked_multimodal_encoder (bool, optional) — Skip any calculations for multimodal encoder for unmasked inputs. FLAVA pretraining doesn’t need unmasked multimodal embeddings or outputs as of now.

  • mlm_labels (torch.LongTensor of shape (batch_size, text_seq_len), optional) — Labels for computing the left-to-right language and multimodal masked modeling loss (next word prediction). Indices should be in [-100, 0, ..., text_config.vocab_size - 1] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., text_config.vocab_size - 1].

  • itm_labels (torch.LongTensor of shape (batch_size, 1), optional) — Labels for computing the image-text matching loss. 0 means the pairs don’t match and 1 means they match. The pairs with 0 will be skipped for calculation of MMM and global contrastive losses as well.

  • return_loss (bool, optional, defaults to None) — Whether to return the calculated loss or not.

  • attention_mask (torch.FloatTensor of shape (batch_size, text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor)

A transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs.

  • loss (torch.FloatTensor, optional, returned when return_loss is True) — Total loss calculated for this model.

  • loss_info (FlavaLosses) — Detailed info for FLAVA Pretraining losses. Check FlavaLosses class description for the information on the keys.

  • mim_logits (torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values are present and input_ids_masked are not) — The logits for MIM unimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is returned when bool_masked_pos has some of the patches masked.

  • mlm_logits (torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when input_ids_masked are present and pixel_values are not) — The logits for MLM unimodal loss. The flattened output is returned when input_ids_masked has some of the tokens masked.

  • itm_logits (torch.FloatTensor of shape (batch_size, 2), optional, returned when input_ids_masked and pixel_values are present) — The logits for ITM loss. Note that ITM loss is calculated on masked pairs in FLAVA.

  • mmm_image_logits (torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values and input_ids_masked are present) — The logits for MMM image multimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is returned when bool_masked_pos has some of the patches masked.

  • mmm_text_logits (torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when pixel_values and input_ids_masked are present) — The logits for MMM text multimodal loss. The flattened output is returned when input_ids_masked has some of the tokens masked.

  • contrastive_logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeddings and text_embeddings but passed through FLAVA’s image_projection and text_projection layers respectively. This represents the image-text similarity scores. This is calculated on unmasked images and texts.

  • contrastive_logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeddings and image_embeddings but passed through FLAVA’s text_projection and image_projection layers respectively. This is calculated on unmasked images and texts.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
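
A minimal, illustrative sketch of a single forward pass rather than a full pretraining setup: it assumes the facebook/flava-full checkpoint, reuses the unmasked input_ids as input_ids_masked just so the call runs, and disables loss computation since no labels are provided.

>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import FlavaProcessor, FlavaForPreTraining

>>> model = FlavaForPreTraining.from_pretrained("facebook/flava-full")
>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(
...     text=["a photo of a cat"],
...     images=image,
...     return_tensors="pt",
...     padding=True,
...     return_image_mask=True,       # bool_masked_pos for masked image modeling
...     return_codebook_pixels=True,  # codebook_pixel_values for the image codebook
... )
>>> # Illustration only: a real pretraining setup would mask tokens and pass mlm/mim/itm labels
>>> inputs["input_ids_masked"] = inputs["input_ids"].clone()

>>> with torch.no_grad():
...     outputs = model(**inputs, return_loss=False)
>>> # Inspect which outputs were produced for this combination of inputs
>>> print(sorted(outputs.keys()))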

FlavaModel

class transformers.FlavaModel

( config: FlavaConfig )

Parameters

  • config (FlavaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward

( input_ids: typing.Optional[torch.LongTensor] = None, pixel_values: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, bool_masked_pos: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, image_attention_mask: typing.Optional[torch.Tensor] = None, skip_multimodal_encoder: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: bool = True, return_dict: typing.Optional[bool] = None ) → transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor)

Parameters

  • bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).

  • interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings.

  • token_type_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • skip_multimodal_encoder (bool, optional) — Skip any calculations for multimodal encoder. Useful if multimodal encoding is not going to be used.

Returns

transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor)

A transformers.models.flava.modeling_flava.FlavaModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = AutoProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.contrastive_logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities

get_text_features

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )

Parameters

  • token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
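
A short sketch of extracting text-only features, again assuming the facebook/flava-full checkpoint; only the tokenizer part of the processor is exercised here:

>>> from transformers import AutoProcessor, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = AutoProcessor.from_pretrained("facebook/flava-full")

>>> # Text-only inputs for the text encoder
>>> text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)
>>> text_features = model.get_text_features(**text_inputs)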

get_image_features

( pixel_values: typing.Optional[torch.Tensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, interpolate_pos_encoding: typing.Optional[bool] = None, attention_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )

Parameters

  • bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).

  • interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
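
A matching sketch for image-only features, assuming the facebook/flava-full checkpoint:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = AutoProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Image-only inputs for the image encoder
>>> image_inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**image_inputs)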

FlavaImageCodebook

class transformers.FlavaImageCodebook

( config: FlavaImageCodebookConfig, **kwargs: typing.Any )

Parameters

  • config (FlavaImageCodebookConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

FLAVA's image codebook model, inspired by DALL-E's original encoder. It outputs raw hidden states and can be used to generate image tokens for an image based on DALL-E's vocab. Used to generate labels for MIM. Use get_codebook_indices to get image tokens for an image.

forward

( pixel_values: FloatTensor )

get_codebook_indices

( pixel_values: Tensor )

get_codebook_probs

( pixel_values: Tensor )
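
A minimal sketch of turning an image into discrete codebook indices. It assumes the facebook/flava-image-codebook checkpoint, which pairs with the facebook/flava-full image processor used elsewhere on this page:

>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import FlavaImageProcessor, FlavaImageCodebook

>>> codebook = FlavaImageCodebook.from_pretrained("facebook/flava-image-codebook")
>>> image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # return_codebook_pixels produces the inputs expected by the codebook
>>> inputs = image_processor(image, return_codebook_pixels=True, return_tensors="pt")
>>> with torch.no_grad():
...     indices = codebook.get_codebook_indices(inputs["codebook_pixel_values"])
>>> print(indices.shape)  # one discrete token id per codebook patch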

FlavaTextModel

class transformers.FlavaTextModel

( config: FlavaTextConfig, add_pooling_layer: bool = True )

Parameters

  • config (FlavaTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward

Parameters

  • token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:


>>> from transformers import AutoTokenizer, FlavaTextModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/flava-full")
>>> model = FlavaTextModel.from_pretrained("facebook/flava-full")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state

FlavaImageModel

class transformers.FlavaImageModel

( config: FlavaImageConfig, add_pooling_layer: bool = True )

Parameters

forward

Parameters

  • bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).

  • interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:


>>> from transformers import AutoImageProcessor, FlavaImageModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("boincai/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/flava-full")
>>> model = FlavaImageModel.from_pretrained("facebook/flava-full")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 197, 768]

FlavaMultimodalModel

class transformers.FlavaMultimodalModel

( config: FlavaMultimodalConfig, add_pooling_layer = True )

Parameters

forward

Parameters

  • hidden_states (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len, hidden_size)) — The concatenated hidden states of unimodal encoders.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:

    • 1 indicates the head is not masked,

    • 0 indicates the head is masked.

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import FlavaMultimodalModel
>>> import torch

>>> model = FlavaMultimodalModel.from_pretrained("facebook/flava-full")

>>> # FlavaMultimodalModel consumes the concatenated hidden states produced by the
>>> # unimodal (image and text) encoders, not raw text or pixel inputs. Random
>>> # hidden states of the right width are used here purely for illustration.
>>> hidden_states = torch.randn(1, 204, model.config.hidden_size)

>>> with torch.no_grad():
...     outputs = model(hidden_states=hidden_states)

>>> last_hidden_states = outputs.last_hidden_state

FlavaConfig

FlavaConfig is the configuration class to store the configuration of a FlavaModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the text model, image model, image codebook and multimodal model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

from_configs

( image_config: FlavaImageConfig, text_config: FlavaTextConfig, multimodal_config: FlavaMultimodalConfig, image_codebook_config: FlavaImageCodebookConfig, **kwargs ) → FlavaConfig

Instantiate a FlavaConfig (or a derived class) from the FLAVA text model configuration, FLAVA image model configuration, FLAVA multimodal model configuration and FLAVA codebook model configuration.
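For example (a minimal sketch, assuming the classmethod shown above is named from_configs):

>>> from transformers import (
...     FlavaConfig,
...     FlavaImageConfig,
...     FlavaTextConfig,
...     FlavaMultimodalConfig,
...     FlavaImageCodebookConfig,
... )

>>> # Build a combined FLAVA configuration from default sub-configurations.
>>> config = FlavaConfig.from_configs(
...     image_config=FlavaImageConfig(),
...     text_config=FlavaTextConfig(),
...     multimodal_config=FlavaMultimodalConfig(),
...     image_codebook_config=FlavaImageCodebookConfig(),
... )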

FlavaTextConfig

Parameters

  • vocab_size (int, optional, defaults to 30522) — Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FlavaTextModel.

  • type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling FlavaTextModel. Note that even though the text encoder allows token_type_ids values up to 2, only 1 is used for text-only pretraining and fine-tuning, similar to RoBERTa.

  • position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).

This is the configuration class to store the configuration of a FlavaTextModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture.

Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
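Example (a typical configuration-to-model sketch):

>>> from transformers import FlavaTextConfig, FlavaTextModel

>>> # Initializing a FlavaTextConfig with default values
>>> configuration = FlavaTextConfig()

>>> # Initializing a FlavaTextModel (with random weights) from that configuration
>>> model = FlavaTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config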

FlavaImageConfig

Parameters

  • vocab_size (int, optional, defaults to 8192) — Vocabulary size of the FlavaImageCodebook used in conjunction with FlavaImageModel for MIM (Masked Image Modeling) loss for FLAVA.

This is the configuration class to store the configuration of a FlavaImageModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture.

Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

FlavaMultimodalConfig

This is the configuration class to store the configuration of a FlavaMultimodalModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture.

Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

FlavaProcessor

Parameters

  • image_processor (FlavaImageProcessor) — The image processor is a required input.

  • tokenizer (BertTokenizerFast) — The tokenizer is a required input.

FlavaProcessor offers all the functionalities of FlavaImageProcessor and BertTokenizerFast. See the __call__() and decode() for more information.

batch_decode

This method forwards all its arguments to BertTokenizerFast's batch_decode(). Please refer to the docstring of this method for more information.

decode

This method forwards all its arguments to BertTokenizerFast's decode(). Please refer to the docstring of this method for more information.
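Example (a rough usage sketch; the COCO image URL is only illustrative):

>>> import requests
>>> from PIL import Image
>>> from transformers import FlavaProcessor

>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Tokenize the text and preprocess the image in a single call.
>>> inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)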

FlavaForPreTraining

Parameters

  • config (FlavaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

Parameters

  • input_ids_masked (torch.LongTensor of shape (batch_size, text_seq_len)) — Indices of input sequence tokens in the vocabulary. These are the masked version of the original inputs, to be used with MLM. Indices can be obtained using AutoTokenizer along with DataCollatorForMaskedLanguageModeling. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

  • input_ids (torch.LongTensor of shape (batch_size, text_seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

  • token_type_ids (torch.LongTensor of shape (batch_size, text_seq_len), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.__call__() for details.

  • attention_mask (torch.FloatTensor of shape (batch_size, text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • mim_labels (torch.LongTensor of shape (batch_size, image_num_patches), optional) — Labels for computing the image and multimodal masked modeling loss. Indices should be in [-100, 0, ..., image_config.vocab_size - 1]. Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., image_config.vocab_size - 1]. If not passed, they are generated automatically using the image codebook assigned to the model. By default, it uses FlavaImageCodebook. See FlavaImageCodebook to understand how to generate mim_labels.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

  • image_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel.

  • image_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel.

  • text_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids are present) — The text embeddings which are basically the pooled output of FlavaTextModel.

  • text_output (BaseModelOutputWithPooling, optional, returned when input_ids are present) — The output of the FlavaTextModel.

  • multimodal_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present and skip_unmasked_multimodal_encoder is None or False) — The multimodal embeddings which are basically the pooled output of FlavaMultimodalModel.

  • multimodal_output (BaseModelOutputWithPooling, returned when input_ids and pixel_values are present and skip_unmasked_multimodal_encoder is None or False) — The output of the FlavaMultimodalModel.

  • image_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel. Uses bool_masked_pos to create masked images.

  • image_masked_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel. Uses bool_masked_pos to create masked images.

  • text_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids_masked are present) — The text embeddings which are basically the pooled output of FlavaTextModel.

  • text_masked_output (BaseModelOutputWithPooling, optional, returned when input_ids_masked are present) — The output of the FlavaTextModel.

  • multimodal_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present) — The multimodal embeddings which are basically the pooled output of FlavaMultimodalModel.

  • multimodal_masked_output (BaseModelOutputWithPooling, returned when input_ids_masked and pixel_values are present) — The output of the FlavaMultimodalModel.

The FlavaForPreTraining forward method, overrides the __call__ special method.
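Example (a minimal sketch; the return_image_mask / return_codebook_pixels processor flags are assumptions, and input_ids_masked is simply copied from input_ids here, whereas real pretraining would mask it with an MLM data collator):

>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import FlavaProcessor, FlavaForPreTraining

>>> model = FlavaForPreTraining.from_pretrained("facebook/flava-full")
>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Ask the processor for bool_masked_pos and codebook_pixel_values as well.
>>> inputs = processor(
...     text=["a photo of two cats"],
...     images=image,
...     return_tensors="pt",
...     padding=True,
...     return_codebook_pixels=True,
...     return_image_mask=True,
... )
>>> # Assumption: reuse the unmasked ids; a data collator would normally mask them.
>>> inputs["input_ids_masked"] = inputs["input_ids"].clone()

>>> with torch.no_grad():
...     outputs = model(**inputs)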

FlavaModel

Parameters

  • config (FlavaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare FLAVA Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.__call__() for details.

  • input_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

  • token_type_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

  • image_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel.

  • image_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel.

  • text_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids are present) — The text embeddings which are basically the pooled output of FlavaTextModel.

  • text_output (BaseModelOutputWithPooling, optional, returned when input_ids are present) — The output of the FlavaTextModel.

  • multimodal_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present and skip_multimodal_encoder is None or False) — The multimodal embeddings which are basically the pooled output of FlavaMultimodalModel.

  • multimodal_output (BaseModelOutputWithPooling, returned when input_ids and pixel_values are present and skip_multimodal_encoder is None or False) — The output of the FlavaMultimodalModel.

The FlavaModel forward method, overrides the __call__ special method.
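Example (a short sketch retrieving the embeddings described above):

>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import FlavaProcessor, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> image_embeddings = outputs.image_embeddings
>>> text_embeddings = outputs.text_embeddings
>>> multimodal_embeddings = outputs.multimodal_embeddings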

get_text_features

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, text_seq_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

  • token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

The FlavaModel forward method, overrides the __call__ special method.

get_image_features

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.__call__() for details.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

The FlavaModel forward method, overrides the __call__ special method.
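Example (a minimal sketch of the two helpers, assuming the method names labeled above):

>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import FlavaProcessor, FlavaModel

>>> model = FlavaModel.from_pretrained("facebook/flava-full")
>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")

>>> # Text-only features
>>> text_inputs = processor(text=["a photo of two cats"], return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     text_features = model.get_text_features(**text_inputs)

>>> # Image-only features
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     image_features = model.get_image_features(**image_inputs)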

config (FlavaImageCodebookConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

FlavaImageCodebook is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

config (FlavaTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare FLAVA Text Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

  • input_ids (torch.LongTensor of shape (batch_size, text_seq_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

  • token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

    • 0 corresponds to a sentence A token,

    • 1 corresponds to a sentence B token.

  • attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaTextConfig) and inputs.

The FlavaTextModel forward method, overrides the __call__ special method.

config (FlavaImageConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare FLAVA Image Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

( pixel_values: typing.Optional[torch.Tensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, interpolate_pos_encoding: typing.Optional[bool] = None, attention_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.__call__() for details.

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaImageConfig) and inputs.

The FlavaImageModel forward method, overrides the __call__ special method.

config (FlavaMultimodalConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare FLAVA Multimodal Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

( hidden_states: Tensor, attention_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

  • attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,

    • 0 for tokens that are masked.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaMultimodalConfig) and inputs.

The FlavaMultimodalModel forward method, overrides the __call__ special method.
