AutoTokenizer
class transformers.AutoTokenizer
( )
This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).
from_pretrained
( pretrained_model_name_or_path, *inputs, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
A path or url to a single saved vocabulary file, if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g., ./my_model_directory/vocab.txt. (Not applicable to all derived classes.)
inputs (additional positional arguments, optional) — Will be passed along to the Tokenizer __init__() method.
config (PretrainedConfig, optional) — The configuration object used to determine the tokenizer class to instantiate.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
use_fast (bool, optional, defaults to True) — Use a fast Rust-based tokenizer if it is supported for a given model. If a fast tokenizer is not available for a given model, a normal Python-based tokenizer is returned instead.
tokenizer_type (str, optional) — Tokenizer type to be loaded.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
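As a small sketch of the special-token kwargs described above (assuming network access to huggingface.co; gpt2 is a real public model that ships without a padding token):

```python
from transformers import AutoTokenizer

# gpt2 has no pad token by default; kwargs let us set one at load time.
tokenizer = AutoTokenizer.from_pretrained("gpt2", pad_token="<|pad|>")

print(tokenizer.pad_token)
```

The same pattern works for bos_token, eos_token, and the other special-token arguments listed above.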
Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.
The tokenizer class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert — AlbertTokenizer or AlbertTokenizerFast (ALBERT model)
align — BertTokenizer or BertTokenizerFast (ALIGN model)
bark — BertTokenizer or BertTokenizerFast (Bark model)
bart — BartTokenizer or BartTokenizerFast (BART model)
barthez — BarthezTokenizer or BarthezTokenizerFast (BARThez model)
bartpho — BartphoTokenizer (BARTpho model)
bert — BertTokenizer or BertTokenizerFast (BERT model)
bert-generation — BertGenerationTokenizer (Bert Generation model)
bert-japanese — BertJapaneseTokenizer (BertJapanese model)
bertweet — BertweetTokenizer (BERTweet model)
big_bird — BigBirdTokenizer or BigBirdTokenizerFast (BigBird model)
bigbird_pegasus — PegasusTokenizer or PegasusTokenizerFast (BigBird-Pegasus model)
biogpt — BioGptTokenizer (BioGpt model)
blenderbot — BlenderbotTokenizer or BlenderbotTokenizerFast (Blenderbot model)
blenderbot-small — BlenderbotSmallTokenizer (BlenderbotSmall model)
blip — BertTokenizer or BertTokenizerFast (BLIP model)
blip-2 — GPT2Tokenizer or GPT2TokenizerFast (BLIP-2 model)
bloom — BloomTokenizerFast (BLOOM model)
bridgetower — RobertaTokenizer or RobertaTokenizerFast (BridgeTower model)
bros — BertTokenizer or BertTokenizerFast (BROS model)
byt5 — ByT5Tokenizer (ByT5 model)
camembert — CamembertTokenizer or CamembertTokenizerFast (CamemBERT model)
canine — CanineTokenizer (CANINE model)
chinese_clip — BertTokenizer or BertTokenizerFast (Chinese-CLIP model)
clap — RobertaTokenizer or RobertaTokenizerFast (CLAP model)
clip — CLIPTokenizer or CLIPTokenizerFast (CLIP model)
clipseg — CLIPTokenizer or CLIPTokenizerFast (CLIPSeg model)
code_llama — CodeLlamaTokenizer or CodeLlamaTokenizerFast (CodeLlama model)
codegen — CodeGenTokenizer or CodeGenTokenizerFast (CodeGen model)
convbert — ConvBertTokenizer or ConvBertTokenizerFast (ConvBERT model)
cpm — CpmTokenizer or CpmTokenizerFast (CPM model)
cpmant — CpmAntTokenizer (CPM-Ant model)
ctrl — CTRLTokenizer (CTRL model)
data2vec-audio — Wav2Vec2CTCTokenizer (Data2VecAudio model)
data2vec-text — RobertaTokenizer or RobertaTokenizerFast (Data2VecText model)
deberta — DebertaTokenizer or DebertaTokenizerFast (DeBERTa model)
deberta-v2 — DebertaV2Tokenizer or DebertaV2TokenizerFast (DeBERTa-v2 model)
distilbert — DistilBertTokenizer or DistilBertTokenizerFast (DistilBERT model)
dpr — DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model)
electra — ElectraTokenizer or ElectraTokenizerFast (ELECTRA model)
ernie — BertTokenizer or BertTokenizerFast (ERNIE model)
ernie_m — ErnieMTokenizer (ErnieM model)
esm — EsmTokenizer (ESM model)
flaubert — FlaubertTokenizer (FlauBERT model)
fnet — FNetTokenizer or FNetTokenizerFast (FNet model)
fsmt — FSMTTokenizer (FairSeq Machine-Translation model)
funnel — FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model)
git — BertTokenizer or BertTokenizerFast (GIT model)
gpt-sw3 — GPTSw3Tokenizer (GPT-Sw3 model)
gpt2 — GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model)
gpt_bigcode — GPT2Tokenizer or GPT2TokenizerFast (GPTBigCode model)
gpt_neo — GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model)
gpt_neox — GPTNeoXTokenizerFast (GPT NeoX model)
gpt_neox_japanese — GPTNeoXJapaneseTokenizer (GPT NeoX Japanese model)
gptj — GPT2Tokenizer or GPT2TokenizerFast (GPT-J model)
gptsan-japanese — GPTSanJapaneseTokenizer (GPTSAN-japanese model)
groupvit — CLIPTokenizer or CLIPTokenizerFast (GroupViT model)
herbert — HerbertTokenizer or HerbertTokenizerFast (HerBERT model)
hubert — Wav2Vec2CTCTokenizer (Hubert model)
ibert — RobertaTokenizer or RobertaTokenizerFast (I-BERT model)
idefics — LlamaTokenizerFast (IDEFICS model)
instructblip — GPT2Tokenizer or GPT2TokenizerFast (InstructBLIP model)
jukebox — JukeboxTokenizer (Jukebox model)
layoutlm — LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model)
layoutlmv2 — LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model)
layoutlmv3 — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LayoutLMv3 model)
layoutxlm — LayoutXLMTokenizer or LayoutXLMTokenizerFast (LayoutXLM model)
led — LEDTokenizer or LEDTokenizerFast (LED model)
lilt — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LiLT model)
llama — LlamaTokenizer or LlamaTokenizerFast (LLaMA model)
longformer — LongformerTokenizer or LongformerTokenizerFast (Longformer model)
longt5 — T5Tokenizer or T5TokenizerFast (LongT5 model)
luke — LukeTokenizer (LUKE model)
lxmert — LxmertTokenizer or LxmertTokenizerFast (LXMERT model)
m2m_100 — M2M100Tokenizer (M2M100 model)
marian — MarianTokenizer (Marian model)
mbart — MBartTokenizer or MBartTokenizerFast (mBART model)
mbart50 — MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model)
mega — RobertaTokenizer or RobertaTokenizerFast (MEGA model)
megatron-bert — BertTokenizer or BertTokenizerFast (Megatron-BERT model)
mgp-str — MgpstrTokenizer (MGP-STR model)
mistral — LlamaTokenizer or LlamaTokenizerFast (Mistral model)
mluke — MLukeTokenizer (mLUKE model)
mobilebert — MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model)
mpnet — MPNetTokenizer or MPNetTokenizerFast (MPNet model)
mpt — GPTNeoXTokenizerFast (MPT model)
mra — RobertaTokenizer or RobertaTokenizerFast (MRA model)
mt5 — MT5Tokenizer or MT5TokenizerFast (MT5 model)
musicgen — T5Tokenizer or T5TokenizerFast (MusicGen model)
mvp — MvpTokenizer or MvpTokenizerFast (MVP model)
nezha — BertTokenizer or BertTokenizerFast (Nezha model)
nllb — NllbTokenizer or NllbTokenizerFast (NLLB model)
nllb-moe — NllbTokenizer or NllbTokenizerFast (NLLB-MOE model)
nystromformer — AlbertTokenizer or AlbertTokenizerFast (Nyströmformer model)
oneformer — CLIPTokenizer or CLIPTokenizerFast (OneFormer model)
openai-gpt — OpenAIGPTTokenizer or OpenAIGPTTokenizerFast (OpenAI GPT model)
opt — GPT2Tokenizer or GPT2TokenizerFast (OPT model)
owlvit — CLIPTokenizer or CLIPTokenizerFast (OWL-ViT model)
pegasus — PegasusTokenizer or PegasusTokenizerFast (Pegasus model)
pegasus_x — PegasusTokenizer or PegasusTokenizerFast (PEGASUS-X model)
perceiver — PerceiverTokenizer (Perceiver model)
persimmon — LlamaTokenizer or LlamaTokenizerFast (Persimmon model)
phobert — PhobertTokenizer (PhoBERT model)
pix2struct — T5Tokenizer or T5TokenizerFast (Pix2Struct model)
plbart — PLBartTokenizer (PLBart model)
prophetnet — ProphetNetTokenizer (ProphetNet model)
qdqbert — BertTokenizer or BertTokenizerFast (QDQBert model)
rag — RagTokenizer (RAG model)
realm — RealmTokenizer or RealmTokenizerFast (REALM model)
reformer — ReformerTokenizer or ReformerTokenizerFast (Reformer model)
rembert — RemBertTokenizer or RemBertTokenizerFast (RemBERT model)
retribert — RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model)
roberta — RobertaTokenizer or RobertaTokenizerFast (RoBERTa model)
roberta-prelayernorm — RobertaTokenizer or RobertaTokenizerFast (RoBERTa-PreLayerNorm model)
roc_bert — RoCBertTokenizer (RoCBert model)
roformer — RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model)
rwkv — GPTNeoXTokenizerFast (RWKV model)
speech_to_text — Speech2TextTokenizer (Speech2Text model)
speech_to_text_2 — Speech2Text2Tokenizer (Speech2Text2 model)
speecht5 — SpeechT5Tokenizer (SpeechT5 model)
splinter — SplinterTokenizer or SplinterTokenizerFast (Splinter model)
squeezebert — SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model)
switch_transformers — T5Tokenizer or T5TokenizerFast (SwitchTransformers model)
t5 — T5Tokenizer or T5TokenizerFast (T5 model)
tapas — TapasTokenizer (TAPAS model)
tapex — TapexTokenizer (TAPEX model)
transfo-xl — TransfoXLTokenizer (Transformer-XL model)
umt5 — T5Tokenizer or T5TokenizerFast (UMT5 model)
vilt — BertTokenizer or BertTokenizerFast (ViLT model)
visual_bert — BertTokenizer or BertTokenizerFast (VisualBERT model)
vits — VitsTokenizer (VITS model)
wav2vec2 — Wav2Vec2CTCTokenizer (Wav2Vec2 model)
wav2vec2-conformer — Wav2Vec2CTCTokenizer (Wav2Vec2-Conformer model)
wav2vec2_phoneme — Wav2Vec2PhonemeCTCTokenizer (Wav2Vec2Phoneme model)
whisper — WhisperTokenizer or WhisperTokenizerFast (Whisper model)
xclip — CLIPTokenizer or CLIPTokenizerFast (X-CLIP model)
xglm — XGLMTokenizer or XGLMTokenizerFast (XGLM model)
xlm — XLMTokenizer (XLM model)
xlm-prophetnet — XLMProphetNetTokenizer (XLM-ProphetNet model)
xlm-roberta — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model)
xlm-roberta-xl — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa-XL model)
xlnet — XLNetTokenizer or XLNetTokenizerFast (XLNet model)
xmod — XLMRobertaTokenizer or XLMRobertaTokenizerFast (X-MOD model)
yoso — AlbertTokenizer or AlbertTokenizerFast (YOSO model)
Examples:
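The snippet below (a sketch of typical usage; the repo ids are real public models on the Hub, and downloading them assumes network access) illustrates the loading paths described above:

```python
from transformers import AutoTokenizer

# Download vocabulary from huggingface.co and cache it.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Load a tokenizer namespaced under a user or organization name.
tokenizer_de = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

# Loading from a local directory saved with save_pretrained() works the same way:
# tokenizer = AutoTokenizer.from_pretrained("./my_model_directory/")

# The uncased BERT tokenizer lowercases its input before tokenizing.
tokens = tokenizer.tokenize("Hello world")
print(tokens)
```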
register
( config_class, slow_tokenizer_class = None, fast_tokenizer_class = None, exist_ok = False )
Parameters
config_class (PretrainedConfig) — The configuration corresponding to the model to register.
slow_tokenizer_class (PreTrainedTokenizer, optional) — The slow tokenizer to register.
fast_tokenizer_class (PreTrainedTokenizerFast, optional) — The fast tokenizer to register.
Register a new tokenizer in this mapping.