MobileViT
The MobileViT model was proposed in MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers.
The abstract from the paper is the following:
Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.
Tips:
MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow this tutorial for a lightweight introduction.
One can use MobileViTImageProcessor to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB).
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
As the name suggests, MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with TensorFlow Lite.
You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a TensorFlow Lite model:
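A minimal sketch of the conversion (the `apple/mobilevit-xx-small` checkpoint name and the quantization settings below are illustrative assumptions, not the only valid choices):

```python
import tensorflow as tf
from transformers import TFMobileViTForImageClassification

# Assumed checkpoint; any MobileViT TensorFlow checkpoint should work the same way
model_ckpt = "apple/mobilevit-xx-small"
model = TFMobileViTForImageClassification.from_pretrained(model_ckpt)

# Convert the Keras model to TensorFlow Lite with default optimizations
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

# Write the serialized TFLite flatbuffer to disk
tflite_filename = model_ckpt.split("/")[-1] + ".tflite"
with open(tflite_filename, "wb") as f:
    f.write(tflite_model)
```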
The resulting model will be just about one MB in size, making it a good fit for mobile applications where resources and network bandwidth can be constrained.
A list of official BOINC AI and community (indicated by 🌎) resources to help you get started with MobileViT.
Image Classification
Semantic segmentation
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
( num_channels = 3, image_size = 256, patch_size = 2, hidden_sizes = [144, 192, 240], neck_hidden_sizes = [16, 32, 64, 96, 128, 160, 640], num_attention_heads = 4, mlp_ratio = 2.0, expand_ratio = 4.0, hidden_act = 'silu', conv_kernel_size = 3, output_stride = 32, hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.0, classifier_dropout_prob = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, qkv_bias = True, aspp_out_channels = 256, atrous_rates = [6, 12, 18], aspp_dropout_prob = 0.1, semantic_loss_ignore_index = 255, **kwargs )
Parameters
num_channels (int
, optional, defaults to 3) — The number of input channels.
image_size (int
, optional, defaults to 256) — The size (resolution) of each image.
patch_size (int
, optional, defaults to 2) — The size (resolution) of each patch.
hidden_sizes (List[int]
, optional, defaults to [144, 192, 240]
) — Dimensionality (hidden size) of the Transformer encoders at each stage.
neck_hidden_sizes (List[int]
, optional, defaults to [16, 32, 64, 96, 128, 160, 640]
) — The number of channels for the feature maps of the backbone.
num_attention_heads (int
, optional, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.
mlp_ratio (float
, optional, defaults to 2.0) — The ratio of the number of channels in the output of the MLP to the number of channels in the input.
expand_ratio (float
, optional, defaults to 4.0) — Expansion factor for the MobileNetv2 layers.
hidden_act (str
or function
, optional, defaults to "silu"
) — The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
conv_kernel_size (int
, optional, defaults to 3) — The size of the convolutional kernel in the MobileViT layer.
output_stride (int
, optional
, defaults to 32) — The ratio of the spatial resolution of the output to the resolution of the input image.
hidden_dropout_prob (float
, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the Transformer encoder.
attention_probs_dropout_prob (float
, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
classifier_dropout_prob (float
, optional, defaults to 0.1) — The dropout ratio for attached classifiers.
initializer_range (float
, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float
, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
qkv_bias (bool
, optional, defaults to True
) — Whether to add a bias to the queries, keys and values.
aspp_out_channels (int
, optional
, defaults to 256) — Number of output channels used in the ASPP layer for semantic segmentation.
atrous_rates (List[int]
, optional, defaults to [6, 12, 18]
) — Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
aspp_dropout_prob (float
, optional, defaults to 0.1) — The dropout ratio for the ASPP layer for semantic segmentation.
semantic_loss_ignore_index (int
, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model.
Example:
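A short sketch of instantiating a configuration and a model from it (the default arguments shown here are assumptions; adjust them as needed):

```python
from transformers import MobileViTConfig, MobileViTModel

# Initialize a MobileViT-style configuration with default values
configuration = MobileViTConfig()

# Initialize a model (with random weights) from that configuration
model = MobileViTModel(configuration)

# Access the model configuration
configuration = model.config
```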
( *args, **kwargs )
__call__
( images, **kwargs )
Preprocess an image or a batch of images.
post_process_semantic_segmentation
( outputs, target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation
Parameters
target_sizes (List[Tuple]
of length batch_size
, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor]
of length batch_size
, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes
is specified). Each entry of each torch.Tensor
corresponds to a semantic class id.
( do_resize: bool = True, size: typing.Dict[str, int] = None, resample: Resampling = <Resampling.BILINEAR: 2>, do_rescale: bool = True, rescale_factor: typing.Union[int, float] = 0.00392156862745098, do_center_crop: bool = True, crop_size: typing.Dict[str, int] = None, do_flip_channel_order: bool = True, **kwargs )
Parameters
do_resize (bool
, optional, defaults to True
) — Whether to resize the image’s (height, width) dimensions to the specified size
. Can be overridden by the do_resize
parameter in the preprocess
method.
size (Dict[str, int]
, optional, defaults to {"shortest_edge": 224}
): Controls the size of the output image after resizing. Can be overridden by the size
parameter in the preprocess
method.
resample (PILImageResampling
, optional, defaults to PILImageResampling.BILINEAR
) — Defines the resampling filter to use if resizing the image. Can be overridden by the resample
parameter in the preprocess
method.
do_rescale (bool
, optional, defaults to True
) — Whether to rescale the image by the specified scale rescale_factor
. Can be overridden by the do_rescale
parameter in the preprocess
method.
rescale_factor (int
or float
, optional, defaults to 1/255
) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor
parameter in the preprocess
method.
do_center_crop (bool
, optional, defaults to True
) — Whether to crop the input at the center. If the input size is smaller than crop_size
along any edge, the image is padded with 0’s and then center cropped. Can be overridden by the do_center_crop
parameter in the preprocess
method.
crop_size (Dict[str, int]
, optional, defaults to {"height" -- 256, "width": 256}
): Desired output size (size["height"], size["width"])
when applying center-cropping. Can be overridden by the crop_size
parameter in the preprocess
method.
do_flip_channel_order (bool
, optional, defaults to True
) — Whether to flip the color channels from RGB to BGR. Can be overridden by the do_flip_channel_order
parameter in the preprocess
method.
Constructs a MobileViT image processor.
preprocess
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]], do_resize: bool = None, size: typing.Dict[str, int] = None, resample: Resampling = None, do_rescale: bool = None, rescale_factor: float = None, do_center_crop: bool = None, crop_size: typing.Dict[str, int] = None, do_flip_channel_order: bool = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>, input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None, **kwargs )
Parameters
images (ImageInput
) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False
.
do_resize (bool
, optional, defaults to self.do_resize
) — Whether to resize the image.
size (Dict[str, int]
, optional, defaults to self.size
) — Size of the image after resizing.
resample (int
, optional, defaults to self.resample
) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling
. Only has an effect if do_resize
is set to True
.
do_rescale (bool
, optional, defaults to self.do_rescale
) — Whether to rescale the image by rescale factor.
rescale_factor (float
, optional, defaults to self.rescale_factor
) — Rescale factor to rescale the image by if do_rescale
is set to True
.
do_center_crop (bool
, optional, defaults to self.do_center_crop
) — Whether to center crop the image.
crop_size (Dict[str, int]
, optional, defaults to self.crop_size
) — Size of the center crop if do_center_crop
is set to True
.
do_flip_channel_order (bool
, optional, defaults to self.do_flip_channel_order
) — Whether to flip the channel order of the image.
return_tensors (str
or TensorType
, optional) — The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray
.
TensorType.TENSORFLOW
or 'tf'
: Return a batch of type tf.Tensor
.
TensorType.PYTORCH
or 'pt'
: Return a batch of type torch.Tensor
.
TensorType.NUMPY
or 'np'
: Return a batch of type np.ndarray
.
TensorType.JAX
or 'jax'
: Return a batch of type jax.numpy.ndarray
.
data_format (ChannelDimension
or str
, optional, defaults to ChannelDimension.FIRST
) — The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST
: image in (num_channels, height, width) format.
ChannelDimension.LAST
: image in (height, width, num_channels) format.
input_data_format (ChannelDimension
or str
, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
"channels_first"
or ChannelDimension.FIRST
: image in (num_channels, height, width) format.
"channels_last"
or ChannelDimension.LAST
: image in (height, width, num_channels) format.
"none"
or ChannelDimension.NONE
: image in (height, width) format.
Preprocess an image or batch of images.
post_process_semantic_segmentation
( outputs, target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation
Parameters
target_sizes (List[Tuple]
of length batch_size
, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor]
of length batch_size
, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes
is specified). Each entry of each torch.Tensor
corresponds to a semantic class id.
( config: MobileViTConfig, expand_output: bool = True )
Parameters
forward
( pixel_values: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention
or tuple(torch.FloatTensor)
Parameters
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention
or tuple(torch.FloatTensor)
last_hidden_state (torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor
of shape (batch_size, hidden_size)
) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
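A minimal sketch of running the bare backbone (the `apple/mobilevit-small` checkpoint name and the COCO test-image URL are assumptions for illustration):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTModel

# Load an example image (any RGB image works)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTModel.from_pretrained("apple/mobilevit-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Feature map of shape (batch_size, num_channels, height, width)
last_hidden_state = outputs.last_hidden_state
print(last_hidden_state.shape)
```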
( config: MobileViTConfig )
Parameters
MobileViT model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.
forward
Parameters
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
labels (torch.LongTensor
of shape (batch_size,)
, optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels == 1
a regression loss is computed (Mean-Square loss). If config.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor
of shape (batch_size, config.num_labels)
) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width)
. Hidden-states (also called feature maps) of the model at the output of each stage.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
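A minimal sketch of image classification (the `apple/mobilevit-small` checkpoint and the test-image URL are assumptions for illustration):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```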
( config: MobileViTConfig )
Parameters
MobileViT model with a semantic segmentation head on top, e.g. for Pascal VOC.
forward
Parameters
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail.
labels (torch.LongTensor
of shape (batch_size, height, width)
, optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels > 1
, a classification loss is computed (Cross-Entropy).
Returns
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor
of shape (batch_size, config.num_labels, logits_height, logits_width)
) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values
passed as inputs. This is to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
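A minimal sketch of semantic segmentation, including optional post-processing (the `apple/deeplabv3-mobilevit-small` checkpoint and the test-image URL are assumptions for illustration):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits of shape (batch_size, num_labels, logits_height, logits_width)
logits = outputs.logits

# Optionally resize the predictions back to the original image size
segmentation = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)
```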
( *args, **kwargs )
Parameters
TensorFlow models and layers in transformers
accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit()
things should “just work” for you - just pass your inputs and labels in any format that model.fit()
supports! If, however, you want to use the second format outside of Keras methods like fit()
and predict()
, such as when creating your own layers or models with the Keras Functional
API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with pixel_values
only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask])
or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
call
Parameters
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
Returns
last_hidden_state (tf.Tensor
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor
of shape (batch_size, hidden_size)
) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of tf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of tf.Tensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
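A minimal sketch of running the TensorFlow backbone (the `apple/mobilevit-small` checkpoint and the test-image URL are assumptions for illustration):

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, TFMobileViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = TFMobileViTModel.from_pretrained("apple/mobilevit-small")

inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)

# Feature map output of the backbone
last_hidden_state = outputs.last_hidden_state
print(last_hidden_state.shape)
```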
( *args, **kwargs )
Parameters
MobileViT model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.
TensorFlow models and layers in transformers
accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit()
things should “just work” for you - just pass your inputs and labels in any format that model.fit()
supports! If, however, you want to use the second format outside of Keras methods like fit()
and predict()
, such as when creating your own layers or models with the Keras Functional
API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with pixel_values
only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask])
or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
call
( pixel_values: tf.Tensor | None = None, output_hidden_states: Optional[bool] = None, labels: tf.Tensor | None = None, return_dict: Optional[bool] = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention
or tuple(tf.Tensor)
Parameters
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
labels (tf.Tensor
of shape (batch_size,)
, optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels == 1
a regression loss is computed (Mean-Square loss). If config.num_labels > 1
a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention
or tuple(tf.Tensor)
loss (tf.Tensor
of shape (1,)
, optional, returned when labels
is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor
of shape (batch_size, config.num_labels)
) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of tf.Tensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width)
. Hidden-states (also called feature maps) of the model at the output of each stage.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
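A minimal sketch of TensorFlow image classification (the `apple/mobilevit-small` checkpoint and the test-image URL are assumptions for illustration):

```python
import requests
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFMobileViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = TFMobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

inputs = image_processor(images=image, return_tensors="tf")
logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet classes
predicted_class_idx = int(tf.math.argmax(logits, axis=-1)[0])
print("Predicted class:", model.config.id2label[predicted_class_idx])
```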
( *args, **kwargs )
Parameters
MobileViT model with a semantic segmentation head on top, e.g. for Pascal VOC.
TensorFlow models and layers in transformers
accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit()
things should “just work” for you - just pass your inputs and labels in any format that model.fit()
supports! If, however, you want to use the second format outside of Keras methods like fit()
and predict()
, such as when creating your own layers or models with the Keras Functional
API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with pixel_values
only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask])
or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
call
( pixel_values: tf.Tensor | None = None, labels: tf.Tensor | None = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → transformers.modeling_tf_outputs.TFSemanticSegmenterOutputWithNoAttention
or tuple(tf.Tensor)
Parameters
output_hidden_states (bool
, optional) — Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
labels (tf.Tensor
of shape (batch_size, height, width)
, optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels > 1
, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSemanticSegmenterOutputWithNoAttention
or tuple(tf.Tensor)
loss (tf.Tensor
of shape (1,)
, optional, returned when labels
is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor
of shape (batch_size, config.num_labels, logits_height, logits_width)
) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values
passed as inputs. This is to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(tf.Tensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of tf.Tensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
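A minimal sketch of TensorFlow semantic segmentation (the `apple/deeplabv3-mobilevit-small` checkpoint and the test-image URL are assumptions for illustration):

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, TFMobileViTForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = TFMobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")

inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)

# Logits of shape (batch_size, num_labels, logits_height, logits_width)
logits = outputs.logits
print(logits.shape)
```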
This model was contributed by . The TensorFlow version of the model was contributed by . The original code and weights can be found .
MobileViTForImageClassification is supported by this example script and notebook.
See also:
This is the configuration class to store the configuration of a MobileViTModel. It is used to instantiate a MobileViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MobileViT architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
outputs (MobileViTForSemanticSegmentation) — Raw outputs of the model.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
outputs (MobileViTForSemanticSegmentation) — Raw outputs of the model.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileViT model outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
pixel_values (torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MobileViTImageProcessor.__call__() for details.
return_dict (bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention
or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
The forward method overrides the __call__
special method.
config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( pixel_values: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None, return_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
pixel_values (torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MobileViTImageProcessor.__call__() for details.
return_dict (bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
The forward method overrides the __call__
special method.
config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( pixel_values: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → or tuple(torch.FloatTensor)
pixel_values (torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MobileViTImageProcessor.__call__() for details.
return_dict (bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
or tuple(torch.FloatTensor)
A or a tuple of torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
The forward method overrides the __call__
special method.
config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileViT model outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
( pixel_values: tf.Tensor | None = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → or tuple(tf.Tensor)
pixel_values (np.ndarray
, tf.Tensor
, List[tf.Tensor]
, Dict[str, tf.Tensor]
or Dict[str, np.ndarray]
and each example must have the shape (batch_size, num_channels, height, width)
) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MobileViTImageProcessor.__call__() for details.
return_dict (bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
or tuple(tf.Tensor)
A or a tuple of tf.Tensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
The forward method overrides the __call__
special method.
config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
pixel_values (np.ndarray
, tf.Tensor
, List[tf.Tensor]
, Dict[str, tf.Tensor]
or Dict[str, np.ndarray]
and each example must have the shape (batch_size, num_channels, height, width)
) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MobileViTImageProcessor.__call__() for details.
return_dict (bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
A transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention
or a tuple of tf.Tensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
The forward method overrides the __call__
special method.
config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
pixel_values (np.ndarray
, tf.Tensor
, List[tf.Tensor]
, Dict[str, tf.Tensor]
or Dict[str, np.ndarray]
and each example must have the shape (batch_size, num_channels, height, width)
) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MobileViTImageProcessor.__call__() for details.
return_dict (bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
A transformers.modeling_tf_outputs.TFSemanticSegmenterOutputWithNoAttention
or a tuple of tf.Tensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
The forward method overrides the __call__
special method.