FocalNet

Overview

The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. FocalNets completely replace self-attention (used in models like ViT and Swin) by a focal modulation mechanism for modeling token interactions in vision. The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation.

The abstract from the paper is the following:

We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1× outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3× schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.
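
To make the three components concrete, here is a rough, self-contained PyTorch sketch of the focal modulation idea. It is only an illustration under simplifying assumptions: the projection names, gating details, and placement of normalization do not match the library's modeling_focalnet.py one-to-one.

import torch
from torch import nn


class FocalModulationSketch(nn.Module):
    """Illustrative (non-official) sketch of focal modulation."""

    def __init__(self, dim: int, focal_level: int = 2, focal_window: int = 3):
        super().__init__()
        self.focal_level = focal_level
        # one linear projection yields the query, the initial context, and the per-level gates
        self.f = nn.Linear(dim, 2 * dim + (focal_level + 1))
        # (i) hierarchical contextualization: a stack of depth-wise convolutions with
        # growing kernel sizes to cover short- to long-range context
        self.focal_layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(
                    dim, dim,
                    kernel_size=focal_window + 2 * level,
                    padding=(focal_window + 2 * level) // 2,
                    groups=dim,
                ),
                nn.GELU(),
            )
            for level in range(focal_level)
        )
        self.h = nn.Conv2d(dim, dim, kernel_size=1)  # modulator projection
        self.proj = nn.Linear(dim, dim)

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # hidden_state: (batch_size, height, width, dim)
        dim = hidden_state.shape[-1]
        query, context, gates = torch.split(self.f(hidden_state), [dim, dim, self.focal_level + 1], dim=-1)
        context = context.permute(0, 3, 1, 2)  # (batch_size, dim, height, width)
        gates = gates.permute(0, 3, 1, 2)      # (batch_size, focal_level + 1, height, width)

        # (ii) gated aggregation over the focal levels
        aggregated = torch.zeros_like(context)
        for level, layer in enumerate(self.focal_layers):
            context = layer(context)  # progressively larger receptive field
            aggregated = aggregated + context * gates[:, level : level + 1]
        # global (image-level) context acts as the last focal level
        global_context = context.mean(dim=(2, 3), keepdim=True)
        aggregated = aggregated + global_context * gates[:, self.focal_level :]

        # (iii) element-wise modulation of the query by the aggregated context
        modulator = self.h(aggregated).permute(0, 2, 3, 1)
        return self.proj(query * modulator)


# example: 96-dim features on a 56x56 grid; the output keeps the input shape
modulated = FocalModulationSketch(dim=96)(torch.randn(1, 56, 56, 96))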

Tips:

This model was contributed by nielsr. The original code can be found here.

FocalNetConfig

class transformers.FocalNetConfig


( image_size = 224, patch_size = 4, num_channels = 3, embed_dim = 96, use_conv_embed = False, hidden_sizes = [192, 384, 768, 768], depths = [2, 2, 6, 2], focal_levels = [2, 2, 2, 2], focal_windows = [3, 3, 3, 3], hidden_act = 'gelu', mlp_ratio = 4.0, hidden_dropout_prob = 0.0, drop_path_rate = 0.1, use_layerscale = False, layerscale_value = 0.0001, use_post_layernorm = False, use_post_layernorm_in_modulation = False, normalize_modulator = False, initializer_range = 0.02, layer_norm_eps = 1e-05, encoder_stride = 32, out_features = None, out_indices = None, **kwargs )

Parameters

  • image_size (int, optional, defaults to 224) — The size (resolution) of each image.

  • patch_size (int, optional, defaults to 4) — The size (resolution) of each patch in the embeddings layer.

  • num_channels (int, optional, defaults to 3) — The number of input channels.

  • embed_dim (int, optional, defaults to 96) — Dimensionality of patch embedding.

  • use_conv_embed (bool, optional, defaults to False) — Whether to use convolutional embedding. The authors noted that using convolutional embedding usually improves performance, but it is not used by default.

  • hidden_sizes (List[int], optional, defaults to [192, 384, 768, 768]) — Dimensionality (hidden size) at each stage.

  • depths (list(int), optional, defaults to [2, 2, 6, 2]) — Depth (number of layers) of each stage in the encoder.

  • focal_levels (list(int), optional, defaults to [2, 2, 2, 2]) — Number of focal levels in each layer of the respective stages in the encoder.

  • focal_windows (list(int), optional, defaults to [3, 3, 3, 3]) — Focal window size in each layer of the respective stages in the encoder.

  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

  • mlp_ratio (float, optional, defaults to 4.0) — Ratio of MLP hidden dimensionality to embedding dimensionality.

  • hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder.

  • drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate.

  • use_layerscale (bool, optional, defaults to False) — Whether to use layer scale in the encoder.

  • layerscale_value (float, optional, defaults to 1e-4) — The initial value of the layer scale.

  • use_post_layernorm (bool, optional, defaults to False) — Whether to use post layer normalization in the encoder.

  • use_post_layernorm_in_modulation (bool, optional, defaults to False) — Whether to use post layer normalization in the modulation layer.

  • normalize_modulator (bool, optional, defaults to False) — Whether to normalize the modulator.

  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.

  • encoder_stride (int, optional, defaults to 32) — Factor to increase the spatial resolution by in the decoder head for masked image modeling.

  • out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage.

  • out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage.

This is the configuration class to store the configuration of a FocalNetModel. It is used to instantiate a FocalNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FocalNet microsoft/focalnet-tiny architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:


>>> from transformers import FocalNetConfig, FocalNetModel

>>> # Initializing a FocalNet microsoft/focalnet-tiny style configuration
>>> configuration = FocalNetConfig()

>>> # Initializing a model (with random weights) from the microsoft/focalnet-tiny style configuration
>>> model = FocalNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
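
If you want to use FocalNet as a backbone for a downstream framework, the out_features / out_indices arguments described above control which stage feature maps are returned. A minimal sketch, assuming your transformers version ships FocalNetBackbone and the usual "stage1" to "stage4" stage naming:

>>> import torch
>>> from transformers import FocalNetConfig, FocalNetBackbone

>>> # return the feature maps of the last two stages
>>> config = FocalNetConfig(out_features=["stage3", "stage4"])
>>> backbone = FocalNetBackbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> with torch.no_grad():
...     outputs = backbone(pixel_values)
>>> # one (batch_size, num_channels, height, width) tensor per requested stage
>>> feature_maps = outputs.feature_maps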

FocalNetModel

class transformers.FocalNetModel


( config, add_pooling_layer = True, use_mask_token = False )

Parameters

  • config (FocalNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare FocalNet Model outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( pixel_values: typing.Optional[torch.FloatTensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

  • bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).

Returns

transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or tuple(torch.FloatTensor)

A transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FocalNetConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions.

The FocalNetModel forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:


>>> from transformers import AutoImageProcessor, FocalNetModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
>>> model = FocalNetModel.from_pretrained("microsoft/focalnet-tiny")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 49, 768]
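
If you need spatially shaped feature maps rather than the flattened sequence, you can additionally request hidden states and read reshaped_hidden_states (documented above). A short continuation of the example; the exact shapes depend on the checkpoint and input resolution:

>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
>>> # one (batch_size, hidden_size, height, width) tensor for the embeddings plus one per stage
>>> feature_map_shapes = [tuple(feature_map.shape) for feature_map in outputs.reshaped_hidden_states]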

FocalNetForMaskedImageModeling

class transformers.FocalNetForMaskedImageModeling


( config )

Parameters

  • config (FocalNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

FocalNet Model with a decoder on top for masked image modeling.

This follows the same implementation as in SimMIM.

Note that we provide a script to pre-train this model on custom data in our examples directory.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( pixel_values: typing.Optional[torch.FloatTensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

  • bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).

Returns

transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or tuple(torch.FloatTensor)

A transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FocalNetConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Masked image modeling (MIM) loss.

  • reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed pixel values.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions.

The FocalNetForMaskedImageModeling forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:


>>> from transformers import AutoImageProcessor, FocalNetConfig, FocalNetForMaskedImageModeling
>>> import torch
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-base-simmim-window6-192")
>>> config = FocalNetConfig()
>>> model = FocalNetForMaskedImageModeling(config)

>>> num_patches = (model.config.image_size // model.config.patch_size) ** 2
>>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
>>> # create random boolean mask of shape (batch_size, num_patches)
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
>>> loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
>>> list(reconstructed_pixel_values.shape)
[1, 3, 192, 192]
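
The example above builds a randomly initialized model from a fresh FocalNetConfig. If you instead want to start from the SimMIM-pretrained weights that the image processor above belongs to, you can load the checkpoint directly (a sketch; the configuration, including image_size and patch_size, then comes from the checkpoint):

>>> model = FocalNetForMaskedImageModeling.from_pretrained("microsoft/focalnet-base-simmim-window6-192")
>>> num_patches = (model.config.image_size // model.config.patch_size) ** 2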

FocalNetForImageClassification

class transformers.FocalNetForImageClassification


( config )

Parameters

  • config (FocalNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

FocalNet Model with an image classification head on top (a linear layer on top of the pooled output) e.g. for ImageNet.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( pixel_values: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details.

  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or tuple(torch.FloatTensor)

A transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FocalNetConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.

  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions.

The FocalNetForImageClassification forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:


>>> from transformers import AutoImageProcessor, FocalNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
>>> model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-tiny")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
