# AutoModelForImageClassification

#### class transformers.AutoModelForImageClassification

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/modeling_auto.py#L1341)

( \*args, \*\*kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method or the [from\_config()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_config) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).
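
For illustration, a minimal sketch (in this version the constructor raises an `EnvironmentError` that points you to the two factory methods):

```
>>> from transformers import AutoModelForImageClassification

>>> # Direct construction is disallowed; use `from_pretrained()` or `from_config()`.
>>> try:
...     AutoModelForImageClassification()
... except EnvironmentError:
...     print("use from_pretrained() or from_config() instead")
use from_pretrained() or from_config() instead
```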

**from\_config**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/auto_factory.py#L417)

( config, \*\*kwargs )

Parameters

* **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig)) — The model class to instantiate is selected based on the configuration class:
  * [BeitConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/beit#transformers.BeitConfig) configuration class: [BeitForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/beit#transformers.BeitForImageClassification) (BEiT model)
  * [BitConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bit#transformers.BitConfig) configuration class: [BitForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bit#transformers.BitForImageClassification) (BiT model)
  * [ConvNextConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convnext#transformers.ConvNextConfig) configuration class: [ConvNextForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convnext#transformers.ConvNextForImageClassification) (ConvNeXT model)
  * [ConvNextV2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convnextv2#transformers.ConvNextV2Config) configuration class: [ConvNextV2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convnextv2#transformers.ConvNextV2ForImageClassification) (ConvNeXTV2 model)
  * [CvtConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/cvt#transformers.CvtConfig) configuration class: [CvtForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/cvt#transformers.CvtForImageClassification) (CvT model)
  * [Data2VecVisionConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/data2vec#transformers.Data2VecVisionConfig) configuration class: [Data2VecVisionForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/data2vec#transformers.Data2VecVisionForImageClassification) (Data2VecVision model)
  * [DeiTConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deit#transformers.DeiTConfig) configuration class: [DeiTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deit#transformers.DeiTForImageClassification) or [DeiTForImageClassificationWithTeacher](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deit#transformers.DeiTForImageClassificationWithTeacher) (DeiT model)
  * [DinatConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/dinat#transformers.DinatConfig) configuration class: [DinatForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/dinat#transformers.DinatForImageClassification) (DiNAT model)
  * [Dinov2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/dinov2#transformers.Dinov2Config) configuration class: [Dinov2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/dinov2#transformers.Dinov2ForImageClassification) (DINOv2 model)
  * [EfficientFormerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientformer#transformers.EfficientFormerConfig) configuration class: [EfficientFormerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientformer#transformers.EfficientFormerForImageClassification) or [EfficientFormerForImageClassificationWithTeacher](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientformer#transformers.EfficientFormerForImageClassificationWithTeacher) (EfficientFormer model)
  * [EfficientNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientnet#transformers.EfficientNetConfig) configuration class: [EfficientNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientnet#transformers.EfficientNetForImageClassification) (EfficientNet model)
  * [FocalNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/focalnet#transformers.FocalNetConfig) configuration class: [FocalNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/focalnet#transformers.FocalNetForImageClassification) (FocalNet model)
  * [ImageGPTConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/imagegpt#transformers.ImageGPTConfig) configuration class: [ImageGPTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/imagegpt#transformers.ImageGPTForImageClassification) (ImageGPT model)
  * [LevitConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/levit#transformers.LevitConfig) configuration class: [LevitForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/levit#transformers.LevitForImageClassification) or [LevitForImageClassificationWithTeacher](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/levit#transformers.LevitForImageClassificationWithTeacher) (LeViT model)
  * [MobileNetV1Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilenet_v1#transformers.MobileNetV1Config) configuration class: [MobileNetV1ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilenet_v1#transformers.MobileNetV1ForImageClassification) (MobileNetV1 model)
  * [MobileNetV2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilenet_v2#transformers.MobileNetV2Config) configuration class: [MobileNetV2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilenet_v2#transformers.MobileNetV2ForImageClassification) (MobileNetV2 model)
  * [MobileViTConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilevit#transformers.MobileViTConfig) configuration class: [MobileViTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilevit#transformers.MobileViTForImageClassification) (MobileViT model)
  * [MobileViTV2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilevitv2#transformers.MobileViTV2Config) configuration class: [MobileViTV2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilevitv2#transformers.MobileViTV2ForImageClassification) (MobileViTV2 model)
  * [NatConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/nat#transformers.NatConfig) configuration class: [NatForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/nat#transformers.NatForImageClassification) (NAT model)
  * [PerceiverConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverConfig) configuration class: [PerceiverForImageClassificationLearned](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverForImageClassificationLearned) or [PerceiverForImageClassificationFourier](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverForImageClassificationFourier) or [PerceiverForImageClassificationConvProcessing](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverForImageClassificationConvProcessing) (Perceiver model)
  * [PoolFormerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/poolformer#transformers.PoolFormerConfig) configuration class: [PoolFormerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/poolformer#transformers.PoolFormerForImageClassification) (PoolFormer model)
  * [PvtConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/pvt#transformers.PvtConfig) configuration class: [PvtForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/pvt#transformers.PvtForImageClassification) (PVT model)
  * [RegNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/regnet#transformers.RegNetConfig) configuration class: [RegNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/regnet#transformers.RegNetForImageClassification) (RegNet model)
  * [ResNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/resnet#transformers.ResNetConfig) configuration class: [ResNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/resnet#transformers.ResNetForImageClassification) (ResNet model)
  * [SegformerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/segformer#transformers.SegformerConfig) configuration class: [SegformerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/segformer#transformers.SegformerForImageClassification) (SegFormer model)
  * [SwiftFormerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swiftformer#transformers.SwiftFormerConfig) configuration class: [SwiftFormerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swiftformer#transformers.SwiftFormerForImageClassification) (SwiftFormer model)
  * [SwinConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swin#transformers.SwinConfig) configuration class: [SwinForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swin#transformers.SwinForImageClassification) (Swin Transformer model)
  * [Swinv2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swinv2#transformers.Swinv2Config) configuration class: [Swinv2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swinv2#transformers.Swinv2ForImageClassification) (Swin Transformer V2 model)
  * [VanConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/van#transformers.VanConfig) configuration class: [VanForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/van#transformers.VanForImageClassification) (VAN model)
  * [ViTConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit#transformers.ViTConfig) configuration class: [ViTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit#transformers.ViTForImageClassification) (ViT model)
  * [ViTHybridConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit_hybrid#transformers.ViTHybridConfig) configuration class: [ViTHybridForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit_hybrid#transformers.ViTHybridForImageClassification) (ViT Hybrid model)
  * [ViTMSNConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit_msn#transformers.ViTMSNConfig) configuration class: [ViTMSNForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit_msn#transformers.ViTMSNForImageClassification) (ViTMSN model)

Instantiates one of the model classes of the library (with an image classification head) from a configuration.

Note: Loading a model from its configuration file does **not** load the model weights. It only affects the model’s configuration. Use [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) to load the model weights.

Examples:

```
>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_config(config)
```
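
The dispatch follows the mapping above: the class of `config` selects the concrete model class. A small sketch of checking this, assuming the `google/vit-base-patch16-224` checkpoint (whose configuration is a `ViTConfig`):

```
>>> from transformers import AutoConfig, AutoModelForImageClassification, ViTForImageClassification

>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_config(config)  # weights are randomly initialized
>>> isinstance(model, ViTForImageClassification)
True
```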

**from\_pretrained**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/auto_factory.py#L448)

( \*model\_args, \*\*kwargs )

Parameters

* **pretrained\_model\_name\_or\_path** (`str` or `os.PathLike`) — Can be either:
  * A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  * A path to a *directory* containing model weights saved using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  * A path or url to a *TensorFlow index checkpoint file* (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
* **model\_args** (additional positional arguments, *optional*) — Will be passed along to the underlying model `__init__()` method.
* **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig), *optional*) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  * The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
  * The model was saved using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
  * The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.
* **state\_dict** (`Dict[str, torch.Tensor]`, *optional*) — A state dictionary to use instead of a state dictionary loaded from the saved weights file.

  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.
* **cache\_dir** (`str` or `os.PathLike`, *optional*) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
* **from\_tf** (`bool`, *optional*, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of `pretrained_model_name_or_path` argument).
* **force\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
* **resume\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to resume downloading from an incompletely received file if one exists, rather than deleting it and starting over.
* **proxies** (`Dict[str, str]`, *optional*) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
* **output\_loading\_info** (`bool`, *optional*, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
* **local\_files\_only** (`bool`, *optional*, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
* **revision** (`str`, *optional*, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
* **trust\_remote\_code** (`bool`, *optional*, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
* **code\_revision** (`str`, *optional*, defaults to `"main"`) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
* **kwargs** (additional keyword arguments, *optional*) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded (see the sketch after this list):
  * If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model’s `__init__` method (we assume all relevant updates to the configuration have already been done).
  * If a configuration is not provided, `kwargs` will be first passed to the configuration class initialization function ([from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s `__init__` function.
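
To make the first path concrete: when you pass an explicit `config`, apply configuration changes to the config object *before* calling `from_pretrained()`, since the remaining kwargs then go straight to the model’s `__init__`. A short sketch (checkpoint name is illustrative):

```
>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Update the configuration first, then hand it over explicitly.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model = AutoModelForImageClassification.from_pretrained(
...     "google/vit-base-patch16-224", config=config
... )
>>> model.config.output_attentions
True
```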

Instantiates one of the model classes of the library (with an image classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:

* **beit** — [BeitForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/beit#transformers.BeitForImageClassification) (BEiT model)
* **bit** — [BitForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bit#transformers.BitForImageClassification) (BiT model)
* **convnext** — [ConvNextForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convnext#transformers.ConvNextForImageClassification) (ConvNeXT model)
* **convnextv2** — [ConvNextV2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convnextv2#transformers.ConvNextV2ForImageClassification) (ConvNeXTV2 model)
* **cvt** — [CvtForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/cvt#transformers.CvtForImageClassification) (CvT model)
* **data2vec-vision** — [Data2VecVisionForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/data2vec#transformers.Data2VecVisionForImageClassification) (Data2VecVision model)
* **deit** — [DeiTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deit#transformers.DeiTForImageClassification) or [DeiTForImageClassificationWithTeacher](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deit#transformers.DeiTForImageClassificationWithTeacher) (DeiT model)
* **dinat** — [DinatForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/dinat#transformers.DinatForImageClassification) (DiNAT model)
* **dinov2** — [Dinov2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/dinov2#transformers.Dinov2ForImageClassification) (DINOv2 model)
* **efficientformer** — [EfficientFormerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientformer#transformers.EfficientFormerForImageClassification) or [EfficientFormerForImageClassificationWithTeacher](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientformer#transformers.EfficientFormerForImageClassificationWithTeacher) (EfficientFormer model)
* **efficientnet** — [EfficientNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/efficientnet#transformers.EfficientNetForImageClassification) (EfficientNet model)
* **focalnet** — [FocalNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/focalnet#transformers.FocalNetForImageClassification) (FocalNet model)
* **imagegpt** — [ImageGPTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/imagegpt#transformers.ImageGPTForImageClassification) (ImageGPT model)
* **levit** — [LevitForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/levit#transformers.LevitForImageClassification) or [LevitForImageClassificationWithTeacher](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/levit#transformers.LevitForImageClassificationWithTeacher) (LeViT model)
* **mobilenet\_v1** — [MobileNetV1ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilenet_v1#transformers.MobileNetV1ForImageClassification) (MobileNetV1 model)
* **mobilenet\_v2** — [MobileNetV2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilenet_v2#transformers.MobileNetV2ForImageClassification) (MobileNetV2 model)
* **mobilevit** — [MobileViTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilevit#transformers.MobileViTForImageClassification) (MobileViT model)
* **mobilevitv2** — [MobileViTV2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilevitv2#transformers.MobileViTV2ForImageClassification) (MobileViTV2 model)
* **nat** — [NatForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/nat#transformers.NatForImageClassification) (NAT model)
* **perceiver** — [PerceiverForImageClassificationLearned](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverForImageClassificationLearned) or [PerceiverForImageClassificationFourier](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverForImageClassificationFourier) or [PerceiverForImageClassificationConvProcessing](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/perceiver#transformers.PerceiverForImageClassificationConvProcessing) (Perceiver model)
* **poolformer** — [PoolFormerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/poolformer#transformers.PoolFormerForImageClassification) (PoolFormer model)
* **pvt** — [PvtForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/pvt#transformers.PvtForImageClassification) (PVT model)
* **regnet** — [RegNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/regnet#transformers.RegNetForImageClassification) (RegNet model)
* **resnet** — [ResNetForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/resnet#transformers.ResNetForImageClassification) (ResNet model)
* **segformer** — [SegformerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/segformer#transformers.SegformerForImageClassification) (SegFormer model)
* **swiftformer** — [SwiftFormerForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swiftformer#transformers.SwiftFormerForImageClassification) (SwiftFormer model)
* **swin** — [SwinForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swin#transformers.SwinForImageClassification) (Swin Transformer model)
* **swinv2** — [Swinv2ForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/swinv2#transformers.Swinv2ForImageClassification) (Swin Transformer V2 model)
* **van** — [VanForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/van#transformers.VanForImageClassification) (VAN model)
* **vit** — [ViTForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit#transformers.ViTForImageClassification) (ViT model)
* **vit\_hybrid** — [ViTHybridForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit_hybrid#transformers.ViTHybridForImageClassification) (ViT Hybrid model)
* **vit\_msn** — [ViTMSNForImageClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/vit_msn#transformers.ViTMSNForImageClassification) (ViTMSN model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with `model.train()`.
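
A quick check of the default mode (sketch; checkpoint name is illustrative):

```
>>> from transformers import AutoModelForImageClassification

>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
>>> model.training  # evaluation mode by default
False
>>> model = model.train()  # re-enable dropout etc. before fine-tuning
>>> model.training
True
```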

Examples:

```
>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/vit_tf_model_config.json")
>>> model = AutoModelForImageClassification.from_pretrained(
...     "./tf_model/vit_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
```
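
Beyond loading, a typical inference pass pairs the model with its image processor. A minimal end-to-end sketch, assuming the `google/vit-base-patch16-224` checkpoint and a local image file (the path is illustrative):

```
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForImageClassification

>>> processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

>>> image = Image.open("cat.png")  # illustrative local file
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> print(model.config.id2label[logits.argmax(-1).item()])
```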
