
TFAutoModel


class transformers.TFAutoModel

( *args, **kwargs )

This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).
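As a quick illustration, here is a minimal sketch (assuming a standard transformers and TensorFlow install) of what this restriction looks like in practice; the printed message is our own, not the library's error text:

>>> from transformers import TFAutoModel

>>> # Direct construction raises an error; only the factory class methods are supported.
>>> try:
...     TFAutoModel()
... except Exception:
...     print("use TFAutoModel.from_pretrained(...) or TFAutoModel.from_config(...)")
use TFAutoModel.from_pretrained(...) or TFAutoModel.from_config(...)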

from_config

( **kwargs )

Parameters

  • config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

    • AlbertConfig configuration class: TFAlbertModel (ALBERT model)
    • BartConfig configuration class: TFBartModel (BART model)
    • BertConfig configuration class: TFBertModel (BERT model)
    • BlenderbotConfig configuration class: TFBlenderbotModel (Blenderbot model)
    • BlenderbotSmallConfig configuration class: TFBlenderbotSmallModel (BlenderbotSmall model)
    • BlipConfig configuration class: TFBlipModel (BLIP model)
    • CLIPConfig configuration class: TFCLIPModel (CLIP model)
    • CTRLConfig configuration class: TFCTRLModel (CTRL model)
    • CamembertConfig configuration class: TFCamembertModel (CamemBERT model)
    • ConvBertConfig configuration class: TFConvBertModel (ConvBERT model)
    • ConvNextConfig configuration class: TFConvNextModel (ConvNeXT model)
    • CvtConfig configuration class: TFCvtModel (CvT model)
    • DPRConfig configuration class: TFDPRQuestionEncoder (DPR model)
    • Data2VecVisionConfig configuration class: TFData2VecVisionModel (Data2VecVision model)
    • DebertaConfig configuration class: TFDebertaModel (DeBERTa model)
    • DebertaV2Config configuration class: TFDebertaV2Model (DeBERTa-v2 model)
    • DeiTConfig configuration class: TFDeiTModel (DeiT model)
    • DistilBertConfig configuration class: TFDistilBertModel (DistilBERT model)
    • EfficientFormerConfig configuration class: TFEfficientFormerModel (EfficientFormer model)
    • ElectraConfig configuration class: TFElectraModel (ELECTRA model)
    • EsmConfig configuration class: TFEsmModel (ESM model)
    • FlaubertConfig configuration class: TFFlaubertModel (FlauBERT model)
    • FunnelConfig configuration class: TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
    • GPT2Config configuration class: TFGPT2Model (OpenAI GPT-2 model)
    • GPTJConfig configuration class: TFGPTJModel (GPT-J model)
    • GroupViTConfig configuration class: TFGroupViTModel (GroupViT model)
    • HubertConfig configuration class: TFHubertModel (Hubert model)
    • LEDConfig configuration class: TFLEDModel (LED model)
    • LayoutLMConfig configuration class: TFLayoutLMModel (LayoutLM model)
    • LayoutLMv3Config configuration class: TFLayoutLMv3Model (LayoutLMv3 model)
    • LongformerConfig configuration class: TFLongformerModel (Longformer model)
    • LxmertConfig configuration class: TFLxmertModel (LXMERT model)
    • MBartConfig configuration class: TFMBartModel (mBART model)
    • MPNetConfig configuration class: TFMPNetModel (MPNet model)
    • MT5Config configuration class: TFMT5Model (MT5 model)
    • MarianConfig configuration class: TFMarianModel (Marian model)
    • MobileBertConfig configuration class: TFMobileBertModel (MobileBERT model)
    • MobileViTConfig configuration class: TFMobileViTModel (MobileViT model)
    • OPTConfig configuration class: TFOPTModel (OPT model)
    • OpenAIGPTConfig configuration class: TFOpenAIGPTModel (OpenAI GPT model)
    • PegasusConfig configuration class: TFPegasusModel (Pegasus model)
    • RegNetConfig configuration class: TFRegNetModel (RegNet model)
    • RemBertConfig configuration class: TFRemBertModel (RemBERT model)
    • ResNetConfig configuration class: TFResNetModel (ResNet model)
    • RoFormerConfig configuration class: TFRoFormerModel (RoFormer model)
    • RobertaConfig configuration class: TFRobertaModel (RoBERTa model)
    • RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
    • SamConfig configuration class: TFSamModel (SAM model)
    • SegformerConfig configuration class: TFSegformerModel (SegFormer model)
    • Speech2TextConfig configuration class: TFSpeech2TextModel (Speech2Text model)
    • SwinConfig configuration class: TFSwinModel (Swin Transformer model)
    • T5Config configuration class: TFT5Model (T5 model)
    • TapasConfig configuration class: TFTapasModel (TAPAS model)
    • TransfoXLConfig configuration class: TFTransfoXLModel (Transformer-XL model)
    • ViTConfig configuration class: TFViTModel (ViT model)
    • ViTMAEConfig configuration class: TFViTMAEModel (ViTMAE model)
    • VisionTextDualEncoderConfig configuration class: TFVisionTextDualEncoderModel (VisionTextDualEncoder model)
    • Wav2Vec2Config configuration class: TFWav2Vec2Model (Wav2Vec2 model)
    • WhisperConfig configuration class: TFWhisperModel (Whisper model)
    • XGLMConfig configuration class: TFXGLMModel (XGLM model)
    • XLMConfig configuration class: TFXLMModel (XLM model)
    • XLMRobertaConfig configuration class: TFXLMRobertaModel (XLM-RoBERTa model)
    • XLNetConfig configuration class: TFXLNetModel (XLNet model)

Instantiates one of the base model classes of the library from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:


>>> from transformers import AutoConfig, TFAutoModel

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = TFAutoModel.from_config(config)
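
A follow-up sketch (checkpoint name assumed to be available on huggingface.co): the configuration class alone selects the architecture per the mapping above, and the returned model has freshly initialized weights because from_config() does not download a checkpoint:

>>> from transformers import AutoConfig, TFAutoModel

>>> # GPT2Config maps to TFGPT2Model in the table above; no weights are downloaded.
>>> config = AutoConfig.from_pretrained("gpt2")
>>> model = TFAutoModel.from_config(config)
>>> type(model).__name__
'TFGPT2Model'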

from_pretrained

( *model_args, **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.

    • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.

    • A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.

  • model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.

  • config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

    • The model is a model provided by the library (loaded with the model id string of a pretrained model).

    • The model was saved using save_pretrained() and is reloaded by supplying the save directory.

    • The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

  • from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).

  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

  • resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

  • output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).

  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

  • trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

  • code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

  • kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

    • If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).

    • If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

  • albert — TFAlbertModel (ALBERT model)
  • bart — TFBartModel (BART model)
  • bert — TFBertModel (BERT model)
  • blenderbot — TFBlenderbotModel (Blenderbot model)
  • blenderbot-small — TFBlenderbotSmallModel (BlenderbotSmall model)
  • blip — TFBlipModel (BLIP model)
  • camembert — TFCamembertModel (CamemBERT model)
  • clip — TFCLIPModel (CLIP model)
  • convbert — TFConvBertModel (ConvBERT model)
  • convnext — TFConvNextModel (ConvNeXT model)
  • ctrl — TFCTRLModel (CTRL model)
  • cvt — TFCvtModel (CvT model)
  • data2vec-vision — TFData2VecVisionModel (Data2VecVision model)
  • deberta — TFDebertaModel (DeBERTa model)
  • deberta-v2 — TFDebertaV2Model (DeBERTa-v2 model)
  • deit — TFDeiTModel (DeiT model)
  • distilbert — TFDistilBertModel (DistilBERT model)
  • dpr — TFDPRQuestionEncoder (DPR model)
  • efficientformer — TFEfficientFormerModel (EfficientFormer model)
  • electra — TFElectraModel (ELECTRA model)
  • esm — TFEsmModel (ESM model)
  • flaubert — TFFlaubertModel (FlauBERT model)
  • funnel — TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
  • gpt-sw3 — TFGPT2Model (GPT-Sw3 model)
  • gpt2 — TFGPT2Model (OpenAI GPT-2 model)
  • gptj — TFGPTJModel (GPT-J model)
  • groupvit — TFGroupViTModel (GroupViT model)
  • hubert — TFHubertModel (Hubert model)
  • layoutlm — TFLayoutLMModel (LayoutLM model)
  • layoutlmv3 — TFLayoutLMv3Model (LayoutLMv3 model)
  • led — TFLEDModel (LED model)
  • longformer — TFLongformerModel (Longformer model)
  • lxmert — TFLxmertModel (LXMERT model)
  • marian — TFMarianModel (Marian model)
  • mbart — TFMBartModel (mBART model)
  • mobilebert — TFMobileBertModel (MobileBERT model)
  • mobilevit — TFMobileViTModel (MobileViT model)
  • mpnet — TFMPNetModel (MPNet model)
  • mt5 — TFMT5Model (MT5 model)
  • openai-gpt — TFOpenAIGPTModel (OpenAI GPT model)
  • opt — TFOPTModel (OPT model)
  • pegasus — TFPegasusModel (Pegasus model)
  • regnet — TFRegNetModel (RegNet model)
  • rembert — TFRemBertModel (RemBERT model)
  • resnet — TFResNetModel (ResNet model)
  • roberta — TFRobertaModel (RoBERTa model)
  • roberta-prelayernorm — TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
  • roformer — TFRoFormerModel (RoFormer model)
  • sam — TFSamModel (SAM model)
  • segformer — TFSegformerModel (SegFormer model)
  • speech_to_text — TFSpeech2TextModel (Speech2Text model)
  • swin — TFSwinModel (Swin Transformer model)
  • t5 — TFT5Model (T5 model)
  • tapas — TFTapasModel (TAPAS model)
  • transfo-xl — TFTransfoXLModel (Transformer-XL model)
  • vision-text-dual-encoder — TFVisionTextDualEncoderModel (VisionTextDualEncoder model)
  • vit — TFViTModel (ViT model)
  • vit_mae — TFViTMAEModel (ViTMAE model)
  • wav2vec2 — TFWav2Vec2Model (Wav2Vec2 model)
  • whisper — TFWhisperModel (Whisper model)
  • xglm — TFXGLMModel (XGLM model)
  • xlm — TFXLMModel (XLM model)
  • xlm-roberta — TFXLMRobertaModel (XLM-RoBERTa model)
  • xlnet — TFXLNetModel (XLNet model)

Examples:


>>> from transformers import AutoConfig, TFAutoModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModel.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModel.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModel.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
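
A further sketch (model id assumed to exist on huggingface.co with TensorFlow weights) showing that the checkpoint’s model_type drives class selection, and that a directory written by save_pretrained() can be passed back to from_pretrained():

>>> import tempfile

>>> from transformers import TFAutoModel

>>> # This checkpoint's config.model_type is "distilbert", so TFDistilBertModel is returned.
>>> model = TFAutoModel.from_pretrained("distilbert-base-uncased")
>>> type(model).__name__
'TFDistilBertModel'

>>> # Round trip through a local directory.
>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir)
...     reloaded = TFAutoModel.from_pretrained(tmp_dir)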

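And a minimal sketch of the Hub-related parameters described above (the values here are illustrative; the revision must exist in the repository):

>>> from transformers import TFAutoModel

>>> # Pin the checkpoint to a specific git revision; set local_files_only=True
>>> # on later runs to load entirely from the local cache without network access.
>>> model = TFAutoModel.from_pretrained(
...     "bert-base-cased",
...     revision="main",
...     local_files_only=False,
... )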