# TFAutoModelForMaskedLM

#### class transformers.TFAutoModelForMaskedLM

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/modeling_tf_auto.py#L610)

( \*args, \*\*kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method or the [from\_config()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_config) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).
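Concretely, calling the constructor raises an error, while the factory class methods succeed. A minimal sketch, assuming `transformers` with TensorFlow support is installed (the tiny `BertConfig` values are arbitrary and chosen only to keep instantiation fast):

```python
from transformers import BertConfig, TFAutoModelForMaskedLM

# Direct instantiation is unsupported and raises an EnvironmentError
# pointing you to from_pretrained()/from_config() instead.
try:
    TFAutoModelForMaskedLM()
except EnvironmentError as err:
    print(type(err).__name__)  # EnvironmentError

# The supported entry point: a factory class method. A deliberately tiny
# config keeps this fast; the resulting weights are randomly initialized.
config = BertConfig(
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64
)
model = TFAutoModelForMaskedLM.from_config(config)
```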

**from\_config**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/auto_factory.py#L417)

( \*\*kwargs )

Parameters

* **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig)) — The model class to instantiate is selected based on the configuration class:
  * [AlbertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/albert#transformers.AlbertConfig) configuration class: [TFAlbertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/albert#transformers.TFAlbertForMaskedLM) (ALBERT model)
  * [BertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bert#transformers.BertConfig) configuration class: [TFBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bert#transformers.TFBertForMaskedLM) (BERT model)
  * [CamembertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/camembert#transformers.CamembertConfig) configuration class: [TFCamembertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/camembert#transformers.TFCamembertForMaskedLM) (CamemBERT model)
  * [ConvBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convbert#transformers.ConvBertConfig) configuration class: [TFConvBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convbert#transformers.TFConvBertForMaskedLM) (ConvBERT model)
  * [DebertaConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta#transformers.DebertaConfig) configuration class: [TFDebertaForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta#transformers.TFDebertaForMaskedLM) (DeBERTa model)
  * [DebertaV2Config](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [TFDebertaV2ForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta-v2#transformers.TFDebertaV2ForMaskedLM) (DeBERTa-v2 model)
  * [DistilBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/distilbert#transformers.DistilBertConfig) configuration class: [TFDistilBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/distilbert#transformers.TFDistilBertForMaskedLM) (DistilBERT model)
  * [ElectraConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/electra#transformers.ElectraConfig) configuration class: [TFElectraForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/electra#transformers.TFElectraForMaskedLM) (ELECTRA model)
  * [EsmConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/esm#transformers.EsmConfig) configuration class: [TFEsmForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/esm#transformers.TFEsmForMaskedLM) (ESM model)
  * [FlaubertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flaubert#transformers.FlaubertConfig) configuration class: [TFFlaubertWithLMHeadModel](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flaubert#transformers.TFFlaubertWithLMHeadModel) (FlauBERT model)
  * [FunnelConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/funnel#transformers.FunnelConfig) configuration class: [TFFunnelForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/funnel#transformers.TFFunnelForMaskedLM) (Funnel Transformer model)
  * [LayoutLMConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlm#transformers.LayoutLMConfig) configuration class: [TFLayoutLMForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlm#transformers.TFLayoutLMForMaskedLM) (LayoutLM model)
  * [LongformerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/longformer#transformers.LongformerConfig) configuration class: [TFLongformerForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/longformer#transformers.TFLongformerForMaskedLM) (Longformer model)
  * [MPNetConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mpnet#transformers.MPNetConfig) configuration class: [TFMPNetForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mpnet#transformers.TFMPNetForMaskedLM) (MPNet model)
  * [MobileBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilebert#transformers.MobileBertConfig) configuration class: [TFMobileBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilebert#transformers.TFMobileBertForMaskedLM) (MobileBERT model)
  * [RemBertConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/rembert#transformers.RemBertConfig) configuration class: [TFRemBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/rembert#transformers.TFRemBertForMaskedLM) (RemBERT model)
  * [RoFormerConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roformer#transformers.RoFormerConfig) configuration class: [TFRoFormerForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roformer#transformers.TFRoFormerForMaskedLM) (RoFormer model)
  * [RobertaConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta#transformers.RobertaConfig) configuration class: [TFRobertaForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta#transformers.TFRobertaForMaskedLM) (RoBERTa model)
  * [RobertaPreLayerNormConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta-prelayernorm#transformers.RobertaPreLayerNormConfig) configuration class: [TFRobertaPreLayerNormForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta-prelayernorm#transformers.TFRobertaPreLayerNormForMaskedLM) (RoBERTa-PreLayerNorm model)
  * [TapasConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/tapas#transformers.TapasConfig) configuration class: [TFTapasForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/tapas#transformers.TFTapasForMaskedLM) (TAPAS model)
  * [XLMConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm#transformers.XLMConfig) configuration class: [TFXLMWithLMHeadModel](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm#transformers.TFXLMWithLMHeadModel) (XLM model)
  * [XLMRobertaConfig](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig) configuration class: [TFXLMRobertaForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMaskedLM) (XLM-RoBERTa model)

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

Note: Loading a model from its configuration file does **not** load the model weights. It only affects the model’s configuration. Use [from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = TFAutoModelForMaskedLM.from_config(config)
```
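Because no weights are loaded, the instance returned by `from_config()` is randomly initialized, and the configuration class alone decides which concrete model class you get. A hedged sketch (the small `BertConfig` values are arbitrary, chosen only to keep instantiation cheap):

```python
from transformers import BertConfig, TFAutoModelForMaskedLM, TFBertForMaskedLM

# A deliberately small configuration; real checkpoints use much larger values.
config = BertConfig(
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64
)
model = TFAutoModelForMaskedLM.from_config(config)

# The BertConfig configuration class selects TFBertForMaskedLM, per the mapping above.
print(isinstance(model, TFBertForMaskedLM))  # True
```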

**from\_pretrained**

[\<source>](https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/models/auto/auto_factory.py#L448)

( \*model\_args, \*\*kwargs )

Parameters

* **pretrained\_model\_name\_or\_path** (`str` or `os.PathLike`) — Can be either:
  * A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  * A path to a *directory* containing model weights saved using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  * A path or url to a *PyTorch state\_dict save file* (e.g., `./pt_model/pytorch_model.bin`). In this case, `from_pt` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the PyTorch model to a TensorFlow model with the provided conversion scripts and then loading the TensorFlow model.
* **model\_args** (additional positional arguments, *optional*) — Will be passed along to the underlying model `__init__()` method.
* **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig), *optional*) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  * The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
  * The model was saved using [save\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
  * The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.
* **cache\_dir** (`str` or `os.PathLike`, *optional*) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
* **from\_pt** (`bool`, *optional*, defaults to `False`) — Load the model weights from a PyTorch checkpoint save file (see docstring of `pretrained_model_name_or_path` argument).
* **force\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
* **resume\_download** (`bool`, *optional*, defaults to `False`) — Whether or not to resume an interrupted download rather than deleting the incompletely received file.
* **proxies** (`Dict[str, str]`, *optional*) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
* **output\_loading\_info** (`bool`, *optional*, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
* **local\_files\_only** (`bool`, *optional*, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
* **revision** (`str`, *optional*, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id; since we use a git-based system for storing models and other artifacts on huggingface.co, `revision` can be any identifier allowed by git.
* **trust\_remote\_code** (`bool`, *optional*, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
* **code\_revision** (`str`, *optional*, defaults to `"main"`) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id; since we use a git-based system for storing models and other artifacts on huggingface.co, `code_revision` can be any identifier allowed by git.
* **kwargs** (additional keyword arguments, *optional*) — Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  * If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model’s `__init__` method (we assume all relevant updates to the configuration have already been done).
  * If a configuration is not provided, `kwargs` will be first passed to the configuration class initialization function ([from\_pretrained()](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/configuration#transformers.PretrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s `__init__` function.

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:

* **albert** — [TFAlbertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/albert#transformers.TFAlbertForMaskedLM) (ALBERT model)
* **bert** — [TFBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/bert#transformers.TFBertForMaskedLM) (BERT model)
* **camembert** — [TFCamembertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/camembert#transformers.TFCamembertForMaskedLM) (CamemBERT model)
* **convbert** — [TFConvBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/convbert#transformers.TFConvBertForMaskedLM) (ConvBERT model)
* **deberta** — [TFDebertaForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta#transformers.TFDebertaForMaskedLM) (DeBERTa model)
* **deberta-v2** — [TFDebertaV2ForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/deberta-v2#transformers.TFDebertaV2ForMaskedLM) (DeBERTa-v2 model)
* **distilbert** — [TFDistilBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/distilbert#transformers.TFDistilBertForMaskedLM) (DistilBERT model)
* **electra** — [TFElectraForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/electra#transformers.TFElectraForMaskedLM) (ELECTRA model)
* **esm** — [TFEsmForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/esm#transformers.TFEsmForMaskedLM) (ESM model)
* **flaubert** — [TFFlaubertWithLMHeadModel](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/flaubert#transformers.TFFlaubertWithLMHeadModel) (FlauBERT model)
* **funnel** — [TFFunnelForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/funnel#transformers.TFFunnelForMaskedLM) (Funnel Transformer model)
* **layoutlm** — [TFLayoutLMForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/layoutlm#transformers.TFLayoutLMForMaskedLM) (LayoutLM model)
* **longformer** — [TFLongformerForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/longformer#transformers.TFLongformerForMaskedLM) (Longformer model)
* **mobilebert** — [TFMobileBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mobilebert#transformers.TFMobileBertForMaskedLM) (MobileBERT model)
* **mpnet** — [TFMPNetForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/mpnet#transformers.TFMPNetForMaskedLM) (MPNet model)
* **rembert** — [TFRemBertForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/rembert#transformers.TFRemBertForMaskedLM) (RemBERT model)
* **roberta** — [TFRobertaForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta#transformers.TFRobertaForMaskedLM) (RoBERTa model)
* **roberta-prelayernorm** — [TFRobertaPreLayerNormForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roberta-prelayernorm#transformers.TFRobertaPreLayerNormForMaskedLM) (RoBERTa-PreLayerNorm model)
* **roformer** — [TFRoFormerForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/roformer#transformers.TFRoFormerForMaskedLM) (RoFormer model)
* **tapas** — [TFTapasForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/tapas#transformers.TFTapasForMaskedLM) (TAPAS model)
* **xlm** — [TFXLMWithLMHeadModel](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm#transformers.TFXLMWithLMHeadModel) (XLM model)
* **xlm-roberta** — [TFXLMRobertaForMaskedLM](https://huggingface.co/docs/transformers/v4.34.1/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMaskedLM) (XLM-RoBERTa model)

Examples:

```python
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedLM.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
```
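Putting the pieces together, a loaded model can fill in a `[MASK]` token directly. A sketch of one way to do it (downloads `bert-base-cased` from huggingface.co on first run; the decoding logic below is illustrative, not part of the API):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Locate the [MASK] position and take the highest-scoring token there.
mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
predicted_id = int(tf.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))  # the model's top guess for the masked token
```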
