Use fast tokenizers from BOINC AI Tokenizers

The PreTrainedTokenizerFast depends on the 🌎 Tokenizers library. Tokenizers built with the 🌎 Tokenizers library can be loaded very simply into 🌎 Transformers.

Before getting into the specifics, let's start by creating a dummy tokenizer in a few lines:


>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace

>>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

>>> tokenizer.pre_tokenizer = Whitespace()
>>> files = [...]  # paths to the text files to train on
>>> tokenizer.train(files, trainer)

We now have a tokenizer trained on the files we defined. We can either continue using it in the same runtime, or save it to a JSON file for later re-use.
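
As a quick sanity check (assuming the training files above exist and training has run), we can encode a sentence and inspect the resulting tokens and ids:

>>> encoding = tokenizer.encode("Hello, y'all! How are you?")
>>> encoding.tokens  # the subword tokens produced by the trained BPE model
>>> encoding.ids  # the corresponding integer ids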

Loading directly from the tokenizer object

Let's see how to leverage this tokenizer object in the 🌎 Transformers library. The PreTrainedTokenizerFast class allows for easy instantiation by accepting the instantiated tokenizer object as an argument:


>>> from transformers import PreTrainedTokenizerFast

>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)

This object can now be used with all the methods shared by the 🌎 Transformers tokenizers! Head to the tokenizer page for more information.
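
For example, a minimal sketch of the usual calling pattern (the exact ids depend on your training files):

>>> encoded = fast_tokenizer("Hello, how are you?")
>>> encoded["input_ids"]  # token ids for the sentence
>>> fast_tokenizer.decode(encoded["input_ids"])  # map the ids back to a string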

Loading from a JSON file

To load a tokenizer from a JSON file, let's first save our tokenizer:


>>> tokenizer.save("tokenizer.json")
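
The resulting tokenizer.json file fully describes the tokenizer. As a quick check, it can also be reloaded with the 🌎 Tokenizers library itself:

>>> from tokenizers import Tokenizer
>>> reloaded_tokenizer = Tokenizer.from_file("tokenizer.json")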

The path to which we saved this file can be passed to the PreTrainedTokenizerFast initialization method using the tokenizer_file parameter:


>>> from transformers import PreTrainedTokenizerFast

>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")

This object can now be used with all the methods shared by the 🌎 Transformers tokenizers! Head to the tokenizer page for more information.
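
Note that a tokenizer loaded this way does not know which of its tokens play special roles (padding, masking, and so on); these can be passed at initialization. A minimal sketch, assuming the special tokens used during training above:

>>> fast_tokenizer = PreTrainedTokenizerFast(
...     tokenizer_file="tokenizer.json",
...     unk_token="[UNK]",
...     pad_token="[PAD]",
...     cls_token="[CLS]",
...     sep_token="[SEP]",
...     mask_token="[MASK]",
... )
>>> batch = fast_tokenizer(["Hello, how are you?", "Fine, thanks!"], padding=True)  # padding works now that pad_token is set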
