Hybrid Vision Transformer (ViT Hybrid)
Overview
The hybrid Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the plain Vision Transformer that leverages a convolutional backbone (specifically, BiT) whose features are used as initial "tokens" for the Transformer.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
This model was contributed by nielsr. The original code (written in JAX) can be found here.
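For a quick first try before diving into the classes documented below, the same checkpoint referenced throughout this page can be loaded with the high-level image-classification pipeline. This is a minimal sketch, assuming the placeholder image path is replaced with one of your own files (a URL also works):

>>> from transformers import pipeline

>>> classifier = pipeline("image-classification", model="google/vit-hybrid-base-bit-384")
>>> # Returns a list of {"label": ..., "score": ...} dicts for the top predicted ImageNet classes
>>> classifier("path/to/your_image.jpg")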
Resources
A list of official BOINC AI and community resources to help you get started with ViT Hybrid.
Image Classification
- ViTHybridForImageClassification is supported by this example script and notebook. 
- See also: Image classification task guide 
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTHybridConfig
class transformers.ViTHybridConfig
( backbone_config = None, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.0, attention_probs_dropout_prob = 0.0, initializer_range = 0.02, layer_norm_eps = 1e-12, image_size = 224, patch_size = 1, num_channels = 3, backbone_featmap_shape = [1, 1024, 24, 24], qkv_bias = True, **kwargs )
Parameters
- hidden_size (int, optional, defaults to 768) -- Dimensionality of the encoder layers and the pooler layer.
- num_hidden_layers (int, optional, defaults to 12) -- Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 12) -- Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (int, optional, defaults to 3072) -- Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- hidden_act (str or function, optional, defaults to "gelu") -- The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
- hidden_dropout_prob (float, optional, defaults to 0.0) -- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_probs_dropout_prob (float, optional, defaults to 0.0) -- The dropout ratio for the attention probabilities.
- initializer_range (float, optional, defaults to 0.02) -- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (float, optional, defaults to 1e-12) -- The epsilon used by the layer normalization layers.
- image_size (int, optional, defaults to 224) -- The size (resolution) of each image.
- patch_size (int, optional, defaults to 1) -- The size (resolution) of each patch.
- num_channels (int, optional, defaults to 3) -- The number of input channels.
- qkv_bias (bool, optional, defaults to True) -- Whether to add a bias to the queries, keys and values.
- backbone_config (Union[Dict[str, Any], PretrainedConfig], optional, defaults to None) -- The configuration of the backbone, either as a dictionary or as the config object of the backbone.
- backbone_featmap_shape (List[int], optional, defaults to [1, 1024, 24, 24]) -- Used only for the hybrid embedding type. The shape of the feature maps of the backbone.
This is the configuration class to store the configuration of a ViTHybridModel. It is used to instantiate a ViT Hybrid model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViT Hybrid google/vit-hybrid-base-bit-384 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import ViTHybridConfig, ViTHybridModel
>>> # Initializing a ViT Hybrid vit-hybrid-base-bit-384 style configuration
>>> configuration = ViTHybridConfig()
>>> # Initializing a model (with random weights) from the vit-hybrid-base-bit-384 style configuration
>>> model = ViTHybridModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
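The backbone_config argument is what sets the hybrid model apart from plain ViT: it describes the BiT convolutional backbone whose feature map is turned into the Transformer's input tokens. Below is a minimal sketch of overriding it; the BitConfig arguments are illustrative assumptions rather than the values used by the released checkpoint.

>>> from transformers import BitConfig, ViTHybridConfig, ViTHybridModel

>>> # Illustrative BiT backbone configuration exposing only the "stage3" feature map
>>> backbone_config = BitConfig(out_features=["stage3"])

>>> # Build a ViT Hybrid configuration (and a model with random weights) on top of that backbone
>>> configuration = ViTHybridConfig(backbone_config=backbone_config)
>>> model = ViTHybridModel(configuration)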
ViTHybridImageProcessor
class transformers.ViTHybridImageProcessor
( do_resize: bool = True, size: typing.Dict[str, int] = None, resample: Resampling = <Resampling.BICUBIC: 3>, do_center_crop: bool = True, crop_size: typing.Dict[str, int] = None, do_rescale: bool = True, rescale_factor: typing.Union[int, float] = 0.00392156862745098, do_normalize: bool = True, image_mean: typing.Union[float, typing.List[float], NoneType] = None, image_std: typing.Union[float, typing.List[float], NoneType] = None, do_convert_rgb: bool = True, **kwargs )
Parameters
- do_resize (bool, optional, defaults to True) -- Whether to resize the image's (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method.
- size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) -- Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method.
- resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) -- Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
- do_center_crop (bool, optional, defaults to True) -- Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the preprocess method.
- crop_size (Dict[str, int], optional, defaults to 224) -- Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess method.
- do_rescale (bool, optional, defaults to True) -- Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method.
- rescale_factor (int or float, optional, defaults to 1/255) -- Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method.
- do_normalize (bool, optional, defaults to True) -- Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
- image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) -- Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
- image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) -- Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
- do_convert_rgb (bool, optional, defaults to True) -- Whether to convert the image to RGB.
Constructs a ViT Hybrid image processor.
preprocess
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]], do_resize: bool = None, size: typing.Dict[str, int] = None, resample: Resampling = None, do_center_crop: bool = None, crop_size: int = None, do_rescale: bool = None, rescale_factor: float = None, do_normalize: bool = None, image_mean: typing.Union[float, typing.List[float], NoneType] = None, image_std: typing.Union[float, typing.List[float], NoneType] = None, do_convert_rgb: bool = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>, input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None, **kwargs )
Parameters
- images (ImageInput) -- Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
- do_resize (bool, optional, defaults to self.do_resize) -- Whether to resize the image.
- size (Dict[str, int], optional, defaults to self.size) -- Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio.
- resample (int, optional, defaults to self.resample) -- Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
- do_center_crop (bool, optional, defaults to self.do_center_crop) -- Whether to center crop the image.
- crop_size (Dict[str, int], optional, defaults to self.crop_size) -- Size of the center crop. Only has an effect if do_center_crop is set to True.
- do_rescale (bool, optional, defaults to self.do_rescale) -- Whether to rescale the image.
- rescale_factor (float, optional, defaults to self.rescale_factor) -- Rescale factor to rescale the image by if do_rescale is set to True.
- do_normalize (bool, optional, defaults to self.do_normalize) -- Whether to normalize the image.
- image_mean (float or List[float], optional, defaults to self.image_mean) -- Image mean to use for normalization. Only has an effect if do_normalize is set to True.
- image_std (float or List[float], optional, defaults to self.image_std) -- Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True.
- do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) -- Whether to convert the image to RGB.
- return_tensors (str or TensorType, optional) -- The type of tensors to return. Can be one of:
  - Unset: Return a list of np.ndarray.
  - TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
  - TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
  - TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
  - TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
- data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) -- The channel dimension format for the output image. Can be one of:
  - ChannelDimension.FIRST: image in (num_channels, height, width) format.
  - ChannelDimension.LAST: image in (height, width, num_channels) format.
  - Unset: defaults to the channel dimension format of the input image.
- input_data_format (ChannelDimension or str, optional) -- The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  - "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  - "none" or ChannelDimension.NONE: image in (height, width) format.
Preprocess an image or batch of images.
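As an illustration of the options above, the sketch below loads the processor associated with the checkpoint used elsewhere on this page and prepares a single image for the model. The sample image URL is an assumption (any PIL image would do), and fetching it requires network access.

>>> from transformers import ViTHybridImageProcessor
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
>>> # Resizes, center-crops, rescales by 1/255 and normalizes, then returns PyTorch tensors
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> inputs["pixel_values"].shape  # (batch_size, num_channels, height, width)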
ViTHybridModel
class transformers.ViTHybridModel
( config: ViTHybridConfig, add_pooling_layer: bool = True, use_mask_token: bool = False )
Parameters
- config (ViTHybridConfig) -- Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare ViT Hybrid Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
( pixel_values: typing.Optional[torch.Tensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, interpolate_pos_encoding: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) -- Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTHybridImageProcessor.__call__() for details.
- head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) -- Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- output_attentions (bool, optional) -- Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) -- Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) -- Whether or not to return a ModelOutput instead of a plain tuple.
- bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) -- Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ViTHybridConfig) and inputs.
- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) -- Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) -- Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) -- Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) -- Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ViTHybridModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, ViTHybridModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("boincai/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
>>> model = ViTHybridModel.from_pretrained("google/vit-hybrid-base-bit-384")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 197, 768]
ViTHybridForImageClassification
class transformers.ViTHybridForImageClassification
( config: ViTHybridConfig )
Parameters
- config (ViTHybridConfig) -- Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
ViT Hybrid Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
( pixel_values: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, interpolate_pos_encoding: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) -- Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTHybridImageProcessor.__call__() for details.
- head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) -- Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- output_attentions (bool, optional) -- Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) -- Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) -- Whether or not to return a ModelOutput instead of a plain tuple.
- labels (torch.LongTensor of shape (batch_size,), optional) -- Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ViTHybridConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) -- Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) -- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) -- Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) -- Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ViTHybridForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, ViTHybridForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("boincai/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
>>> model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
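To fine-tune rather than just predict, pass labels along with the pixel values and the model returns the cross-entropy loss described above. The sketch below reuses the model, inputs, and predicted_label objects from the example above; in a real training loop the labels would come from your own dataset.

>>> import torch

>>> # Reuse the predicted class as a stand-in target, purely for illustration
>>> labels = torch.tensor([predicted_label])
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss  # scalar loss (cross-entropy here, MSE if config.num_labels == 1)
>>> loss.backward()  # backpropagate as you would inside a training loop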
