# TFAutoModelForTokenClassification

#### class transformers.TFAutoModelForTokenClassification

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/modeling_tf_auto.py#L664)

( \*args, \*\*kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method or the [from\_config()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_config) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).

**from\_config**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/auto_factory.py#L417)

( \*\*kwargs )

Parameters

* **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig)) — The model class to instantiate is selected based on the configuration class:
  * [AlbertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/albert#transformers.AlbertConfig) configuration class: [TFAlbertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/albert#transformers.TFAlbertForTokenClassification) (ALBERT model)
  * [BertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bert#transformers.BertConfig) configuration class: [TFBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bert#transformers.TFBertForTokenClassification) (BERT model)
  * [CamembertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/camembert#transformers.CamembertConfig) configuration class: [TFCamembertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/camembert#transformers.TFCamembertForTokenClassification) (CamemBERT model)
  * [ConvBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convbert#transformers.ConvBertConfig) configuration class: [TFConvBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convbert#transformers.TFConvBertForTokenClassification) (ConvBERT model)
  * [DebertaConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta#transformers.DebertaConfig) configuration class: [TFDebertaForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta#transformers.TFDebertaForTokenClassification) (DeBERTa model)
  * [DebertaV2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [TFDebertaV2ForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta-v2#transformers.TFDebertaV2ForTokenClassification) (DeBERTa-v2 model)
  * [DistilBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/distilbert#transformers.DistilBertConfig) configuration class: [TFDistilBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/distilbert#transformers.TFDistilBertForTokenClassification) (DistilBERT model)
  * [ElectraConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/electra#transformers.ElectraConfig) configuration class: [TFElectraForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/electra#transformers.TFElectraForTokenClassification) (ELECTRA model)
  * [EsmConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/esm#transformers.EsmConfig) configuration class: [TFEsmForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/esm#transformers.TFEsmForTokenClassification) (ESM model)
  * [FlaubertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flaubert#transformers.FlaubertConfig) configuration class: [TFFlaubertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flaubert#transformers.TFFlaubertForTokenClassification) (FlauBERT model)
  * [FunnelConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/funnel#transformers.FunnelConfig) configuration class: [TFFunnelForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/funnel#transformers.TFFunnelForTokenClassification) (Funnel Transformer model)
  * [LayoutLMConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlm#transformers.LayoutLMConfig) configuration class: [TFLayoutLMForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlm#transformers.TFLayoutLMForTokenClassification) (LayoutLM model)
  * [LayoutLMv3Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlmv3#transformers.LayoutLMv3Config) configuration class: [TFLayoutLMv3ForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlmv3#transformers.TFLayoutLMv3ForTokenClassification) (LayoutLMv3 model)
  * [LongformerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/longformer#transformers.LongformerConfig) configuration class: [TFLongformerForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/longformer#transformers.TFLongformerForTokenClassification) (Longformer model)
  * [MPNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mpnet#transformers.MPNetConfig) configuration class: [TFMPNetForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mpnet#transformers.TFMPNetForTokenClassification) (MPNet model)
  * [MobileBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilebert#transformers.MobileBertConfig) configuration class: [TFMobileBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilebert#transformers.TFMobileBertForTokenClassification) (MobileBERT model)
  * [RemBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/rembert#transformers.RemBertConfig) configuration class: [TFRemBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/rembert#transformers.TFRemBertForTokenClassification) (RemBERT model)
  * [RoFormerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roformer#transformers.RoFormerConfig) configuration class: [TFRoFormerForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roformer#transformers.TFRoFormerForTokenClassification) (RoFormer model)
  * [RobertaConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta#transformers.RobertaConfig) configuration class: [TFRobertaForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta#transformers.TFRobertaForTokenClassification) (RoBERTa model)
  * [RobertaPreLayerNormConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta-prelayernorm#transformers.RobertaPreLayerNormConfig) configuration class: [TFRobertaPreLayerNormForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta-prelayernorm#transformers.TFRobertaPreLayerNormForTokenClassification) (RoBERTa-PreLayerNorm model)
  * [XLMConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm#transformers.XLMConfig) configuration class: [TFXLMForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm#transformers.TFXLMForTokenClassification) (XLM model)
  * [XLMRobertaConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig) configuration class: [TFXLMRobertaForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForTokenClassification) (XLM-RoBERTa model)
  * [XLNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlnet#transformers.XLNetConfig) configuration class: [TFXLNetForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlnet#transformers.TFXLNetForTokenClassification) (XLNet model)

Instantiates one of the model classes of the library (with a token classification head) from a configuration.

Note: Loading a model from its configuration file does **not** load the model weights. It only affects the model’s configuration. Use [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = TFAutoModelForTokenClassification.from_config(config)
```
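Under the hood, `from_config` selects the model class by the type of the configuration object, as listed above. A minimal, purely illustrative sketch of that dispatch (the class names and mapping here are hypothetical stand-ins, not the actual `transformers` internals):

```python
# Illustrative sketch of config-class -> model-class dispatch. These names
# are hypothetical; they are not the real transformers internals.
class FakeBertConfig:
    pass


class FakeTFBertForTokenClassification:
    def __init__(self, config):
        self.config = config


# Maps each configuration class to the model class it should instantiate.
CONFIG_TO_MODEL = {FakeBertConfig: FakeTFBertForTokenClassification}


def from_config(config):
    model_cls = CONFIG_TO_MODEL.get(type(config))
    if model_cls is None:
        raise ValueError(f"Unrecognized configuration class {type(config)!r}")
    return model_cls(config)


model = from_config(FakeBertConfig())
print(type(model).__name__)  # FakeTFBertForTokenClassification
```

This is also why direct instantiation raises: the generic class only makes sense once a configuration has picked a concrete model class.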

**from\_pretrained**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/auto_factory.py#L448)

( \*model\_args, \*\*kwargs )

Parameters

* **pretrained\_model\_name\_or\_path** (`str` or `os.PathLike`) — Can be either:
  * A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  * A path to a *directory* containing model weights saved using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  * A path or url to a *PyTorch state\_dict save file* (e.g, `./pt_model/pytorch_model.bin`). In this case, `from_pt` should be set to `True` and a configuration object should be provided as `config` argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
* **model\_args** (additional positional arguments, *optional*) — Will be passed along to the underlying model `__init__()` method.
* **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig), *optional*) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  * The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
  * The model was saved using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
  * The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.
* **cache\_dir** (`str` or `os.PathLike`, *optional*) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
* **from\_pt** (`bool`, *optional*, defaults to `False`) — Load the model weights from a PyTorch checkpoint save file (see docstring of `pretrained_model_name_or_path` argument).
* **force\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
* **resume\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
* **proxies** (`Dict[str, str]`, *optional*) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
* **output\_loading\_info** (`bool`, *optional*, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
* **local\_files\_only** (`bool`, *optional*, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
* **revision** (`str`, *optional*, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
* **trust\_remote\_code** (`bool`, *optional*, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
* **code\_revision** (`str`, *optional*, defaults to `"main"`) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.
* **kwargs** (additional keyword arguments, *optional*) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  * If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model’s `__init__` method (we assume all relevant updates to the configuration have already been done).
  * If a configuration is not provided, `kwargs` will be first passed to the configuration class initialization function ([from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s `__init__` function.
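The `**kwargs` splitting described above can be sketched as follows; `split_kwargs` and the attribute set are hypothetical illustrations of the behavior, not the actual implementation:

```python
# Hypothetical sketch: when no explicit `config` is passed, keys of **kwargs
# that match configuration attributes update the config; the remaining keys
# are forwarded to the model's __init__.
def split_kwargs(config_attributes, kwargs):
    config_updates = {k: v for k, v in kwargs.items() if k in config_attributes}
    model_kwargs = {k: v for k, v in kwargs.items() if k not in config_attributes}
    return config_updates, model_kwargs


config_updates, model_kwargs = split_kwargs(
    {"output_attentions", "num_labels"},
    {"output_attentions": True, "custom_arg": 1},
)
print(config_updates)  # {'output_attentions': True}
print(model_kwargs)    # {'custom_arg': 1}
```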

Instantiates one of the model classes of the library (with a token classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:

* **albert** — [TFAlbertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/albert#transformers.TFAlbertForTokenClassification) (ALBERT model)
* **bert** — [TFBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bert#transformers.TFBertForTokenClassification) (BERT model)
* **camembert** — [TFCamembertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/camembert#transformers.TFCamembertForTokenClassification) (CamemBERT model)
* **convbert** — [TFConvBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convbert#transformers.TFConvBertForTokenClassification) (ConvBERT model)
* **deberta** — [TFDebertaForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta#transformers.TFDebertaForTokenClassification) (DeBERTa model)
* **deberta-v2** — [TFDebertaV2ForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta-v2#transformers.TFDebertaV2ForTokenClassification) (DeBERTa-v2 model)
* **distilbert** — [TFDistilBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/distilbert#transformers.TFDistilBertForTokenClassification) (DistilBERT model)
* **electra** — [TFElectraForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/electra#transformers.TFElectraForTokenClassification) (ELECTRA model)
* **esm** — [TFEsmForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/esm#transformers.TFEsmForTokenClassification) (ESM model)
* **flaubert** — [TFFlaubertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flaubert#transformers.TFFlaubertForTokenClassification) (FlauBERT model)
* **funnel** — [TFFunnelForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/funnel#transformers.TFFunnelForTokenClassification) (Funnel Transformer model)
* **layoutlm** — [TFLayoutLMForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlm#transformers.TFLayoutLMForTokenClassification) (LayoutLM model)
* **layoutlmv3** — [TFLayoutLMv3ForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlmv3#transformers.TFLayoutLMv3ForTokenClassification) (LayoutLMv3 model)
* **longformer** — [TFLongformerForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/longformer#transformers.TFLongformerForTokenClassification) (Longformer model)
* **mobilebert** — [TFMobileBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilebert#transformers.TFMobileBertForTokenClassification) (MobileBERT model)
* **mpnet** — [TFMPNetForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mpnet#transformers.TFMPNetForTokenClassification) (MPNet model)
* **rembert** — [TFRemBertForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/rembert#transformers.TFRemBertForTokenClassification) (RemBERT model)
* **roberta** — [TFRobertaForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta#transformers.TFRobertaForTokenClassification) (RoBERTa model)
* **roberta-prelayernorm** — [TFRobertaPreLayerNormForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta-prelayernorm#transformers.TFRobertaPreLayerNormForTokenClassification) (RoBERTa-PreLayerNorm model)
* **roformer** — [TFRoFormerForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roformer#transformers.TFRoFormerForTokenClassification) (RoFormer model)
* **xlm** — [TFXLMForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm#transformers.TFXLMForTokenClassification) (XLM model)
* **xlm-roberta** — [TFXLMRobertaForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForTokenClassification) (XLM-RoBERTa model)
* **xlnet** — [TFXLNetForTokenClassification](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlnet#transformers.TFXLNetForTokenClassification) (XLNet model)
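The fallback pattern matching on `pretrained_model_name_or_path` can be sketched roughly as below; the function and the shortened key list are illustrative assumptions, not the real lookup:

```python
# Rough sketch of the model-type fallback: prefer config.model_type when it
# is available, otherwise substring-match the checkpoint name. Longer keys
# are checked first so e.g. "xlm-roberta" is not mistaken for "roberta".
# Illustrative only; the real table covers every supported architecture.
KNOWN_TYPES = ["xlm-roberta", "deberta-v2", "roberta", "deberta", "xlm", "bert"]


def resolve_model_type(model_type, name_or_path):
    if model_type is not None:
        return model_type
    lowered = name_or_path.lower()
    for key in sorted(KNOWN_TYPES, key=len, reverse=True):
        if key in lowered:
            return key
    raise ValueError(f"Could not infer model type from {name_or_path!r}")


print(resolve_model_type(None, "dbmdz/bert-base-german-cased"))  # bert
print(resolve_model_type(None, "xlm-roberta-base"))              # xlm-roberta
```

Checking longer keys first is the design choice that keeps overlapping names (like `roberta` inside `xlm-roberta`) unambiguous.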

Examples:

```python
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTokenClassification.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForTokenClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
```
