Pre-tokenizers
BertPreTokenizer
class tokenizers.pre_tokenizers.BertPreTokenizer
( )
This pre-tokenizer splits tokens on spaces, and also on punctuation. Each occurrence of a punctuation character will be treated separately.
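A minimal sketch of this behavior, using the pre_tokenize_str() helper described below (the offsets shown in comments are indicative):

```python
from tokenizers.pre_tokenizers import BertPreTokenizer

pre_tokenizer = BertPreTokenizer()
# Whitespace is dropped and each punctuation character becomes its own piece:
# [('Hello', (0, 5)), (',', (5, 6)), ('world', (7, 12)), ('!', (12, 13))]
print(pre_tokenizer.pre_tokenize_str("Hello, world!"))
```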
ByteLevel
class tokenizers.pre_tokenizers.ByteLevel
( add_prefix_space = True, use_regex = True )
Parameters
add_prefix_space (bool, optional, defaults to True): Whether to add a space to the first word if there isn't already one. This lets us treat hello exactly like say hello.
use_regex (bool, optional, defaults to True): Set this to False to prevent this pre-tokenizer from using the GPT2 specific regexp for splitting on whitespace.
ByteLevel PreTokenizer
This pre-tokenizer takes care of replacing all bytes of the given string with a corresponding representation, as well as splitting into words.
alphabet
( ) → List[str]
Returns
List[str]
A list of characters that compose the alphabet
Returns the alphabet used by this PreTokenizer.
Since the ByteLevel works as its name suggests, at the byte level, it encodes each byte value to a unique visible character. This means that there is a total of 256 different characters composing this alphabet.
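A short sketch of both the alphabet and the splitting (the 'Ġ' character stands for an encoded space; exact offsets may vary):

```python
from tokenizers.pre_tokenizers import ByteLevel

# The byte-level alphabet always contains exactly 256 visible characters
print(len(ByteLevel.alphabet()))  # 256

pre_tokenizer = ByteLevel(add_prefix_space=True)
# Spaces are encoded as the visible 'Ġ' character, e.g.
# [('ĠHello', ...), ('Ġworld', ...)]
print(pre_tokenizer.pre_tokenize_str("Hello world"))
```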
CharDelimiterSplit
class tokenizers.pre_tokenizers.CharDelimiterSplit
( delimiter )
This pre-tokenizer simply splits on the provided character. Works like .split(delimiter)
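A minimal sketch, assuming the delimiter is passed at construction time (output indicative):

```python
from tokenizers.pre_tokenizers import CharDelimiterSplit

pre_tokenizer = CharDelimiterSplit("-")
# Splits on '-' like str.split("-"), keeping character offsets:
# [('pre', (0, 3)), ('tokenizer', (4, 13))]
print(pre_tokenizer.pre_tokenize_str("pre-tokenizer"))
```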
Digits
class tokenizers.pre_tokenizers.Digits
( individual_digits = False )
Parameters
individual_digits (bool, optional, defaults to False): If set to True, digits will each be separated into individual tokens. If set to False, consecutive digits will be grouped into a single token.
This pre-tokenizer simply splits digits into separate tokens, as sketched below.
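A sketch of both settings; note that only digit boundaries are split, not whitespace (outputs indicative):

```python
from tokenizers.pre_tokenizers import Digits

# individual_digits=True: each digit becomes its own piece
# [('Call ', (0, 5)), ('1', (5, 6)), ('2', (6, 7)), ('3', (7, 8)), (' please', (8, 15))]
print(Digits(individual_digits=True).pre_tokenize_str("Call 123 please"))

# individual_digits=False: consecutive digits stay grouped
# [('Call ', (0, 5)), ('123', (5, 8)), (' please', (8, 15))]
print(Digits(individual_digits=False).pre_tokenize_str("Call 123 please"))
```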
Metaspace
class tokenizers.pre_tokenizers.Metaspace
( replacement = '▁', add_prefix_space = True )
Parameters
replacement (str, optional, defaults to ▁): The replacement character. Must be exactly one character. By default we use the ▁ (U+2581) meta symbol (same as in SentencePiece).
add_prefix_space (bool, optional, defaults to True): Whether to add a space to the first word if there isn't already one. This lets us treat hello exactly like say hello.
Metaspace pre-tokenizer
This pre-tokenizer replaces any whitespace by the provided replacement character. It then tries to split on these spaces.
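A minimal sketch (the exact offsets may vary across versions):

```python
from tokenizers.pre_tokenizers import Metaspace

pre_tokenizer = Metaspace()  # replacement='▁', add_prefix_space=True
# Whitespace is replaced by the ▁ meta symbol before splitting, e.g.
# [('▁say', ...), ('▁hello', ...)]
print(pre_tokenizer.pre_tokenize_str("say hello"))
```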
PreTokenizer
class tokenizers.pre_tokenizers.PreTokenizer
( )
Base class for all pre-tokenizers
This class is not supposed to be instantiated directly. Instead, any implementation of a PreTokenizer will return an instance of this class when instantiated.
pre_tokenize
( pretok )
Parameters
pretok (~tokenizers.PreTokenizedString): The pre-tokenized string on which to apply this ~tokenizers.pre_tokenizers.PreTokenizer
Pre-tokenize a ~tokenizers.PreTokenizedString in place.
This method allows you to modify a PreTokenizedString in order to keep track of the pre-tokenization, and to leverage the capabilities of the PreTokenizedString. If you just want to see the result of pre-tokenizing a raw string, you can use pre_tokenize_str() instead.
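A sketch of the in-place usage, assuming the PreTokenizedString class and its get_splits() method are available as in recent versions of the library:

```python
from tokenizers import PreTokenizedString
from tokenizers.pre_tokenizers import Whitespace

pretok = PreTokenizedString("Hello, world!")
Whitespace().pre_tokenize(pretok)  # modifies pretok in place
# Each split carries its content and offsets (and, later, its tokens)
for split in pretok.get_splits():
    print(split)
```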
pre_tokenize_str
( sequence ) → List[Tuple[str, Offsets]]
Parameters
sequence (str): A string to pre-tokenize
Returns
List[Tuple[str, Offsets]]
A list of tuples with the pre-tokenized parts and their offsets
Pre-tokenize the given string.
This method provides a way to visualize the effect of a PreTokenizer, but it does not keep track of the alignment, nor does it provide all the capabilities of the PreTokenizedString. If you need some of these, you can use pre_tokenize() instead.
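For quick inspection, something along these lines (output indicative):

```python
from tokenizers.pre_tokenizers import Whitespace

# Returns the parts with their character offsets, without alignment tracking:
# [('Hello', (0, 5)), (',', (5, 6)), ('world', (7, 12)), ('!', (12, 13))]
print(Whitespace().pre_tokenize_str("Hello, world!"))
```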
Punctuation
class tokenizers.pre_tokenizers.Punctuation
( behavior = 'isolated' )
Parameters
behavior (SplitDelimiterBehavior): The behavior to use when splitting. Choices: "removed", "isolated" (default), "merged_with_previous", "merged_with_next", "contiguous"
This pre-tokenizer simply splits on punctuation as individual characters.
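A sketch contrasting two behaviors (outputs indicative):

```python
from tokenizers.pre_tokenizers import Punctuation

# Default 'isolated': each punctuation character becomes its own piece
# [('Hi', (0, 2)), (',', (2, 3)), (' there', (3, 9)), ('!', (9, 10))]
print(Punctuation().pre_tokenize_str("Hi, there!"))

# 'removed': punctuation is dropped entirely
print(Punctuation(behavior="removed").pre_tokenize_str("Hi, there!"))
```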
Sequence
class tokenizers.pre_tokenizers.Sequence
( pretokenizers )
This pre-tokenizer composes other pre_tokenizers and applies them in sequence
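A minimal sketch chaining two of the pre-tokenizers above (output indicative):

```python
from tokenizers.pre_tokenizers import Digits, Sequence, Whitespace

# First split on whitespace, then split the digits apart
pre_tokenizer = Sequence([Whitespace(), Digits(individual_digits=True)])
# [('Call', (0, 4)), ('1', (5, 6)), ('2', (6, 7)), ('3', (7, 8))]
print(pre_tokenizer.pre_tokenize_str("Call 123"))
```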
Split
class tokenizers.pre_tokenizers.Split
( pattern, behavior, invert = False )
Parameters
pattern (str or Regex): A pattern used to split the string. Usually a string or a regex built with tokenizers.Regex
behavior (SplitDelimiterBehavior): The behavior to use when splitting. Choices: "removed", "isolated", "merged_with_previous", "merged_with_next", "contiguous"
invert (bool, optional, defaults to False): Whether to invert the pattern.
Split PreTokenizer
This versatile pre-tokenizer splits using the provided pattern and according to the provided behavior. The pattern can be inverted by making use of the invert flag.
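A minimal sketch using a tokenizers.Regex pattern (output indicative):

```python
from tokenizers import Regex
from tokenizers.pre_tokenizers import Split

# Isolate every run of digits as its own piece
splitter = Split(Regex(r"\d+"), behavior="isolated")
# [('abc', (0, 3)), ('123', (3, 6)), ('def', (6, 9))]
print(splitter.pre_tokenize_str("abc123def"))
```

With invert=True, the role of the pattern is flipped, so that pattern matches are treated as the content rather than as the delimiters.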
UnicodeScripts
class tokenizers.pre_tokenizers.UnicodeScripts
( )
This pre-tokenizer splits on characters that belong to different language families. It roughly follows https://github.com/google/sentencepiece/blob/master/data/Scripts.txt. In practice, Hiragana and Katakana are fused with Han, and 0x30FC is treated as Han too. This mimics the SentencePiece Unigram implementation.
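A minimal sketch on mixed-script input (the exact grouping of the space may vary):

```python
from tokenizers.pre_tokenizers import UnicodeScripts

# Latin and Japanese characters end up in separate pieces
print(UnicodeScripts().pre_tokenize_str("Hello こんにちは"))
```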
Whitespace
class tokenizers.pre_tokenizers.Whitespace
( )
This pre-tokenizer simply splits using the following regex: \w+|[^\w\s]+
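A minimal sketch; note how [^\w\s]+ groups consecutive punctuation into a single piece (output indicative):

```python
from tokenizers.pre_tokenizers import Whitespace

# \w+ matches the word, [^\w\s]+ matches the punctuation run as one piece:
# [('why', (0, 3)), ('?!', (3, 5))]
print(Whitespace().pre_tokenize_str("why?!"))
```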
WhitespaceSplit
class tokenizers.pre_tokenizers.WhitespaceSplit
( )
This pre-tokenizer simply splits on the whitespace. Works like .split()
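A minimal sketch; unlike Whitespace above, punctuation stays attached to the words (output indicative):

```python
from tokenizers.pre_tokenizers import WhitespaceSplit

# Splits on whitespace only:
# [('Hello,', (0, 6)), ('world!', (7, 13))]
print(WhitespaceSplit().pre_tokenize_str("Hello, world!"))
```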