Quick tour

Get up and running with 🌎 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the pipeline() for inference, load a pretrained model and preprocessor with an AutoClass, and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or course next for more in-depth explanations of the concepts introduced here.

Before you begin, make sure you have all the necessary libraries installed:

!pip install transformers datasets

You’ll also need to install your preferred machine learning framework:

Pytorch

pip install torch

TensorFlow

pip install tensorflow
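
If you'd like to confirm that everything installed correctly (an optional sanity check, not part of the original steps), printing the library versions is a quick test:

>>> import transformers, datasets
>>> print(transformers.__version__, datasets.__version__)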

Pipeline

The pipeline() is the easiest and fastest way to use a pretrained model for inference. You can use the pipeline() out-of-the-box for many tasks across different modalities, some of which are shown in the table below:

For a complete list of available tasks, check out the pipeline API reference.

| Task | Description | Modality | Pipeline identifier |
| --- | --- | --- | --- |
| Text classification | assign a label to a given sequence of text | NLP | pipeline(task="sentiment-analysis") |
| Text generation | generate text given a prompt | NLP | pipeline(task="text-generation") |
| Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task="summarization") |
| Image classification | assign a label to an image | Computer vision | pipeline(task="image-classification") |
| Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task="image-segmentation") |
| Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task="object-detection") |
| Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification") |
| Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition") |
| Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa") |
| Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering") |
| Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text") |

Start by creating an instance of pipeline() and specifying a task you want to use it for. In this guide, you'll use the pipeline() for sentiment analysis as an example:

>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")

The pipeline() downloads and caches a default pretrained model and tokenizer for sentiment analysis. Now you can use the classifier on your target text:

>>> classifier("We are very happy to show you the 🌎 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]

If you have more than one input, pass your inputs as a list to the pipeline() to return a list of dictionaries:

>>> results = classifier(["We are very happy to show you the 🌎 Transformers library.", "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309

The pipeline() can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task:

>>> import torch
>>> from transformers import pipeline

>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

Load an audio dataset (see the 🌎 Datasets Quick Start for more details) you'd like to iterate over. For example, load the MInDS-14 dataset:

>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")

You need to make sure the sampling rate of the dataset matches the sampling rate facebook/wav2vec2-base-960h was trained on:

>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))

The audio files are automatically loaded and resampled when calling the "audio" column. Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:

>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT']

For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list that loads all the inputs in memory. Take a look at the pipeline API reference for more information.
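
As an illustrative sketch of that approach (the audio_inputs generator below is hypothetical, reusing the dataset and speech_recognizer from above), you can stream samples through the pipeline one at a time instead of materializing them all:

>>> def audio_inputs():
...     # Yield one waveform at a time so the whole dataset never sits in memory.
...     for sample in dataset:
...         yield sample["audio"]["array"]

>>> for prediction in speech_recognizer(audio_inputs()):
...     print(prediction["text"])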

Use another model and tokenizer in the pipeline

The pipeline() can accommodate any model from the Hub, making it easy to adapt the pipeline() for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual BERT model finetuned for sentiment analysis you can use for French text:

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"

Pytorch

Use AutoModelForSequenceClassification and AutoTokenizer to load the pretrained model and its associated tokenizer (more on an AutoClass in the next section):

>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)

TensorFlow

Use TFAutoModelForSequenceClassification and AutoTokenizer to load the pretrained model and its associated tokenizer (more on a TFAutoClass in the next section):

>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)

Specify the model and tokenizer in the pipeline(), and now you can apply the classifier on French text:

>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🌎 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]

If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our finetuning tutorial to learn how. Finally, after you've finetuned your pretrained model, please consider sharing the model with the community on the Hub to democratize machine learning for everyone! 🌎

AutoClass

Under the hood, the AutoModelForSequenceClassification and AutoTokenizer classes work together to power the pipeline() you used above. An AutoClass is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate AutoClass for your task and its associated preprocessing class.

Let's return to the example from the previous section and see how you can use the AutoClass to replicate the results of the pipeline().

AutoTokenizer

A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the tokenizer summary). The most important thing to remember is you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with.

Load a tokenizer with AutoTokenizer:

>>> from transformers import AutoTokenizer

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)

Pass your text to the tokenizer:

>>> encoding = tokenizer("We are very happy to show you the 🌎 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

The tokenizer returns a dictionary containing:

  • input_ids: numerical representations of your tokens.
  • attention_mask: indicates which tokens should be attended to.
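
As an illustrative aside, you can map the input_ids back to text with tokenizer.decode(), which also reveals the special tokens the model expects around your sentence:

>>> tokenizer.decode(encoding["input_ids"])  # the sentence wrapped in special tokens such as [CLS] and [SEP]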

A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length:

Pytorch

>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 🌎 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="pt",
... )

TensorFlow

>>> tf_batch = tokenizer(
...     ["We are very happy to show you the 🌎 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="tf",
... )

Check out the preprocess tutorial for more details about tokenization, and how to use an AutoImageProcessor, AutoFeatureExtractor, and AutoProcessor to preprocess image, audio, and multimodal inputs.

AutoModel

Pytorch

🌎 Transformers provides a simple and unified way to load pretrained instances. This means you can load an AutoModel like you would load an AutoTokenizer. The only difference is selecting the correct AutoModel for the task. For text (or sequence) classification, you should load AutoModelForSequenceClassification:

>>> from transformers import AutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)

See the task summary for tasks supported by an AutoModel class.

Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding **:

>>> pt_outputs = pt_model(**pt_batch)

The model outputs the final activations in the logits attribute. Apply the softmax function to the logits to retrieve the probabilities:

>>> from torch import nn

>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
        [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
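
To turn these probabilities into human-readable labels (a small illustrative addition; the exact label names depend on the checkpoint), you can look up the highest-scoring index in the model config's id2label mapping:

>>> predicted_ids = pt_predictions.argmax(dim=-1)
>>> [pt_model.config.id2label[i] for i in predicted_ids.tolist()]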

TensorFlow

🌎 Transformers provides a simple and unified way to load pretrained instances. This means you can load a TFAutoModel like you would load an AutoTokenizer. The only difference is selecting the correct TFAutoModel for the task. For text (or sequence) classification, you should load TFAutoModelForSequenceClassification:

>>> from transformers import TFAutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

See the task summary for tasks supported by a TFAutoModel class.

Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:

>>> tf_outputs = tf_model(tf_batch)

The model outputs the final activations in the logits attribute. Apply the softmax function to the logits to retrieve the probabilities:

>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions

All 🌎 Transformers models (PyTorch or TensorFlow) output the tensors before the final activation function (like softmax) because the final activation function is often fused with the loss. Model outputs are special dataclasses, so their attributes are autocompleted in an IDE. The model outputs also behave like a tuple or a dictionary (you can index with an integer, a slice, or a string), in which case attributes that are None are ignored.
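
For example (an illustrative addition), the logits above can be reached in several equivalent ways:

>>> pt_outputs.logits      # attribute access
>>> pt_outputs["logits"]   # dictionary-style access
>>> pt_outputs[0]          # tuple-style access; None-valued attributes are skipped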

Save a model

Pytorch

Once your model is fine-tuned, you can save it with its tokenizer using PreTrainedModel.save_pretrained():

>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)
>>> pt_model.save_pretrained(pt_save_directory)

When you are ready to use the model again, reload it with PreTrainedModel.from_pretrained():

>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")

TensorFlow

Once your model is fine-tuned, you can save it with its tokenizer using TFPreTrainedModel.save_pretrained():

>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)
>>> tf_model.save_pretrained(tf_save_directory)

When you are ready to use the model again, reload it with TFPreTrainedModel.from_pretrained():

>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")

One particularly cool 🌎 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The from_pt or from_tf parameter can convert the model from one framework to the other:

Pytorch

>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)

TensorFlow

>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)

Custom model builds

You can modify the model’s configuration class to change how a model is built. The configuration specifies a model’s attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class. The model attributes are randomly initialized, and you’ll need to train the model before you can use it to get meaningful results.

Start by importing AutoConfig, and then load the pretrained model you want to modify. Within AutoConfig.from_pretrained(), you can specify the attribute you want to change, such as the number of attention heads:

>>> from transformers import AutoConfig

>>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12)

Pytorch

Create a model from your custom configuration with AutoModel.from_config():

>>> from transformers import AutoModel

>>> my_model = AutoModel.from_config(my_config)

TensorFlow

Create a model from your custom configuration with TFAutoModel.from_config():

>>> from transformers import TFAutoModel

>>> my_model = TFAutoModel.from_config(my_config)
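
As a small illustrative extra, a custom configuration can be saved and reloaded just like a pretrained one (the "./custom_config" directory is hypothetical):

>>> my_config.save_pretrained("./custom_config")
>>> my_config = AutoConfig.from_pretrained("./custom_config")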

Take a look at the Create a custom architecture guide for more information about building custom configurations.

Trainer - a PyTorch optimized training loop

All models are a standard torch.nn.Module so you can use them in any typical training loop. While you can write your own training loop, 🌎 Transformers provides a Trainer class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more.

Depending on your task, you'll typically pass the following parameters to Trainer:

  1. You'll start with a PreTrainedModel or a torch.nn.Module:

    >>> from transformers import AutoModelForSequenceClassification
    
    >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
  2. TrainingArguments contains the model hyperparameters you can change like learning rate, batch size, and the number of epochs to train for. The default values are used if you don't specify any training arguments:

    >>> from transformers import TrainingArguments
    
    >>> training_args = TrainingArguments(
    ...     output_dir="path/to/save/folder/",
    ...     learning_rate=2e-5,
    ...     per_device_train_batch_size=8,
    ...     per_device_eval_batch_size=8,
    ...     num_train_epochs=2,
    ... )
  3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:

    >>> from transformers import AutoTokenizer
    
    >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
  4. Load a dataset:

    >>> from datasets import load_dataset
    
    >>> dataset = load_dataset("rotten_tomatoes")  # doctest: +IGNORE_RESULT
  5. Create a function to tokenize the dataset:

    >>> def tokenize_dataset(dataset):
    ...     return tokenizer(dataset["text"])

    Then apply it over the entire dataset with map:

    >>> dataset = dataset.map(tokenize_dataset, batched=True)
  6. A DataCollatorWithPadding to create a batch of examples from your dataset:

    >>> from transformers import DataCollatorWithPadding
    
    >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

Now gather all these classes in Trainer:

>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=dataset["train"],
...     eval_dataset=dataset["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )  # doctest: +SKIP

When you're ready, call train() to start training:

>>> trainer.train()

For tasks - like translation or summarization - that use a sequence-to-sequence model, use the Seq2SeqTrainer and Seq2SeqTrainingArguments classes instead.

You can customize the training loop behavior by subclassing the methods inside Trainer. This allows you to customize features such as the loss function, optimizer, and scheduler. Take a look at the Trainer reference for which methods can be subclassed.

The other way to customize the training loop is by using Callbacks. You can use callbacks to integrate with other libraries and inspect the training loop to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself. To customize something like the loss function, you need to subclass the Trainer instead.
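
For example, here is a minimal sketch of such a subclass; the class name and the two class weights are hypothetical, and it assumes a two-label classification model whose batches contain labels:

>>> import torch
>>> from transformers import Trainer

>>> class WeightedLossTrainer(Trainer):
...     def compute_loss(self, model, inputs, return_outputs=False):
...         # Swap the model's default loss for a class-weighted cross-entropy.
...         labels = inputs.pop("labels")
...         outputs = model(**inputs)
...         weight = torch.tensor([1.0, 2.0]).to(outputs.logits.device)
...         loss_fct = torch.nn.CrossEntropyLoss(weight=weight)
...         loss = loss_fct(outputs.logits.view(-1, model.config.num_labels), labels.view(-1))
...         return (loss, outputs) if return_outputs else loss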

Train with TensorFlow

All models are a standard tf.keras.Model so they can be trained in TensorFlow with the Keras API. 🌎 Transformers provides the prepare_tf_dataset() method to easily load your dataset as a tf.data.Dataset so you can start training right away with Keras' compile and fit methods.

  1. You'll start with a TFPreTrainedModel or a tf.keras.Model:

    >>> from transformers import TFAutoModelForSequenceClassification
    
    >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
  2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:

    >>> from transformers import AutoTokenizer
    
    >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
  3. Create a function to tokenize the dataset:

    >>> def tokenize_dataset(dataset):
    ...     return tokenizer(dataset["text"])  # doctest: +SKIP
  4. Apply the tokenizer over the entire dataset with map and then pass the dataset and tokenizer to prepare_tf_dataset(). You can also change the batch size and shuffle the dataset here if you'd like:

    >>> dataset = dataset.map(tokenize_dataset)  # doctest: +SKIP
    >>> tf_dataset = model.prepare_tf_dataset(
    ...     dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer
    ... )  # doctest: +SKIP
  5. When you're ready, you can call compile and fit to start training. Note that 🌎 Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to (an optional evaluation sketch follows this list):

    >>> from tensorflow.keras.optimizers import Adam
    
    >>> model.compile(optimizer=Adam(3e-5))  # No loss argument!
    >>> model.fit(tf_dataset)  # doctest: +SKIP
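
As an optional follow-up (illustrative, not part of the original steps), you can prepare the test split the same way and evaluate with Keras:

>>> tf_eval_dataset = model.prepare_tf_dataset(
...     dataset["test"], batch_size=16, shuffle=False, tokenizer=tokenizer
... )  # doctest: +SKIP
>>> model.evaluate(tf_eval_dataset)  # doctest: +SKIP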

What’s next?

Now that you've completed the 🌎 Transformers quick tour, check out our guides and learn how to do more specific things like writing a custom model, fine-tuning a model for a task, and training a model with a script. If you're interested in learning more about 🌎 Transformers core concepts, grab a cup of coffee and take a look at our Conceptual Guides!
