AutoModelForMaskedLM
class transformers.AutoModelForMaskedLM
( *args, **kwargs )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error).
from_config
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
AlbertConfig configuration class: AlbertForMaskedLM (ALBERT model)
BartConfig configuration class: BartForConditionalGeneration (BART model)
BertConfig configuration class: BertForMaskedLM (BERT model)
BigBirdConfig configuration class: BigBirdForMaskedLM (BigBird model)
CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
ConvBertConfig configuration class: ConvBertForMaskedLM (ConvBERT model)
Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model)
DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model)
DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model)
DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
ElectraConfig configuration class: ElectraForMaskedLM (ELECTRA model)
ErnieConfig configuration class: ErnieForMaskedLM (ERNIE model)
EsmConfig configuration class: EsmForMaskedLM (ESM model)
FNetConfig configuration class: FNetForMaskedLM (FNet model)
FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
FunnelConfig configuration class: FunnelForMaskedLM (Funnel Transformer model)
IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
LukeConfig configuration class: LukeForMaskedLM (LUKE model)
MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
MegaConfig configuration class: MegaForMaskedLM (MEGA model)
MegatronBertConfig configuration class: MegatronBertForMaskedLM (Megatron-BERT model)
MobileBertConfig configuration class: MobileBertForMaskedLM (MobileBERT model)
MraConfig configuration class: MraForMaskedLM (MRA model)
MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
NezhaConfig configuration class: NezhaForMaskedLM (Nezha model)
NystromformerConfig configuration class: NystromformerForMaskedLM (Nyströmformer model)
PerceiverConfig configuration class: PerceiverForMaskedLM (Perceiver model)
QDQBertConfig configuration class: QDQBertForMaskedLM (QDQBert model)
ReformerConfig configuration class: ReformerForMaskedLM (Reformer model)
RemBertConfig configuration class: RemBertForMaskedLM (RemBERT model)
RoCBertConfig configuration class: RoCBertForMaskedLM (RoCBert model)
RoFormerConfig configuration class: RoFormerForMaskedLM (RoFormer model)
RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
Wav2Vec2Config configuration class: Wav2Vec2ForMaskedLM (Wav2Vec2 model)
XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
XmodConfig configuration class: XmodForMaskedLM (X-MOD model)
YosoConfig configuration class: YosoForMaskedLM (YOSO model)
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
Examples:
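A minimal sketch of from_config. To stay offline it constructs a deliberately tiny BertConfig locally instead of downloading one; in practice you would obtain the config with AutoConfig.from_pretrained("bert-base-uncased") and get the same dispatch.

```python
from transformers import AutoModelForMaskedLM, BertConfig

# Build a configuration locally (no download). A tiny config keeps the example fast;
# AutoConfig.from_pretrained("bert-base-uncased") would fetch a full one from the Hub.
config = BertConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=64, vocab_size=1000)

# from_config picks the class from the mapping above: BertConfig -> BertForMaskedLM.
# Weights are randomly initialized; from_config never loads pretrained weights.
model = AutoModelForMaskedLM.from_config(config)
print(type(model).__name__)  # BertForMaskedLM
```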
from_pretrained
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert — AlbertForMaskedLM (ALBERT model)
bart — BartForConditionalGeneration (BART model)
bert — BertForMaskedLM (BERT model)
big_bird — BigBirdForMaskedLM (BigBird model)
camembert — CamembertForMaskedLM (CamemBERT model)
convbert — ConvBertForMaskedLM (ConvBERT model)
data2vec-text — Data2VecTextForMaskedLM (Data2VecText model)
deberta — DebertaForMaskedLM (DeBERTa model)
deberta-v2 — DebertaV2ForMaskedLM (DeBERTa-v2 model)
distilbert — DistilBertForMaskedLM (DistilBERT model)
electra — ElectraForMaskedLM (ELECTRA model)
ernie — ErnieForMaskedLM (ERNIE model)
esm — EsmForMaskedLM (ESM model)
flaubert — FlaubertWithLMHeadModel (FlauBERT model)
fnet — FNetForMaskedLM (FNet model)
funnel — FunnelForMaskedLM (Funnel Transformer model)
ibert — IBertForMaskedLM (I-BERT model)
layoutlm — LayoutLMForMaskedLM (LayoutLM model)
longformer — LongformerForMaskedLM (Longformer model)
luke — LukeForMaskedLM (LUKE model)
mbart — MBartForConditionalGeneration (mBART model)
mega — MegaForMaskedLM (MEGA model)
megatron-bert — MegatronBertForMaskedLM (Megatron-BERT model)
mobilebert — MobileBertForMaskedLM (MobileBERT model)
mpnet — MPNetForMaskedLM (MPNet model)
mra — MraForMaskedLM (MRA model)
mvp — MvpForConditionalGeneration (MVP model)
nezha — NezhaForMaskedLM (Nezha model)
nystromformer — NystromformerForMaskedLM (Nyströmformer model)
perceiver — PerceiverForMaskedLM (Perceiver model)
qdqbert — QDQBertForMaskedLM (QDQBert model)
reformer — ReformerForMaskedLM (Reformer model)
rembert — RemBertForMaskedLM (RemBERT model)
roberta — RobertaForMaskedLM (RoBERTa model)
roberta-prelayernorm — RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
roc_bert — RoCBertForMaskedLM (RoCBert model)
roformer — RoFormerForMaskedLM (RoFormer model)
squeezebert — SqueezeBertForMaskedLM (SqueezeBERT model)
tapas — TapasForMaskedLM (TAPAS model)
wav2vec2 — Wav2Vec2ForMaskedLM (Wav2Vec2 model)
xlm — XLMWithLMHeadModel (XLM model)
xlm-roberta — XLMRobertaForMaskedLM (XLM-RoBERTa model)
xlm-roberta-xl — XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
xmod — XmodForMaskedLM (X-MOD model)
yoso — YosoForMaskedLM (YOSO model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
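An offline-friendly sketch of from_pretrained: it saves a tiny randomly-initialized BERT to a local directory with save_pretrained() and reloads it through the Auto class, which is the local-directory loading path described above. The usual Hub form is simply AutoModelForMaskedLM.from_pretrained("bert-base-uncased"); the tiny config here is only so the example runs quickly without a download.

```python
import tempfile

from transformers import AutoModelForMaskedLM, BertConfig, BertForMaskedLM

# A deliberately tiny BERT so the example runs quickly; real use would download
# pretrained weights, e.g. AutoModelForMaskedLM.from_pretrained("bert-base-uncased").
config = BertConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=64, vocab_size=1000)
model = BertForMaskedLM(config)

with tempfile.TemporaryDirectory() as tmp:
    model.save_pretrained(tmp)  # writes config.json plus the weights file
    # from_pretrained reads config.json, sees model_type == "bert",
    # and instantiates BertForMaskedLM with the saved weights.
    reloaded = AutoModelForMaskedLM.from_pretrained(tmp)

print(type(reloaded).__name__)  # BertForMaskedLM
print(reloaded.training)        # False: the model is returned in eval mode
```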