Encoding
class tokenizers.Encoding( )
The Encoding represents the output of a Tokenizer.
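As a quick illustration, here is a minimal sketch that builds a tiny word-level tokenizer and inspects the Encoding it produces. The vocabulary and input text are made up for the example, not part of the API:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# Illustrative toy vocabulary
vocab = {"[UNK]": 0, "[PAD]": 1, "hello": 2, "world": 3}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("hello world")
print(type(encoding).__name__)  # Encoding
print(encoding.tokens)          # ['hello', 'world']
print(encoding.ids)             # [2, 3]
```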
property attention_mask
Returns
List[int]
The attention mask
This indicates to the LM which tokens should be attended to, and which should not. This is especially important when batching sequences, where we need to apply padding.
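A minimal sketch of how the mask looks once padding is applied, assuming a toy word-level tokenizer (the vocabulary and padding length are illustrative):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

vocab = {"[UNK]": 0, "[PAD]": 1, "hello": 2, "world": 3}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
# Pad every encoding to a fixed length of 4 tokens
tokenizer.enable_padding(length=4, pad_id=1, pad_token="[PAD]")

encoding = tokenizer.encode("hello world")
print(encoding.tokens)          # ['hello', 'world', '[PAD]', '[PAD]']
print(encoding.attention_mask)  # [1, 1, 0, 0]  (padding is masked out)
```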
property ids
Returns
List[int]
The list of IDs
The IDs are the main input to a Language Model. They are the token indices, the numerical representations that an LM understands.
property n_sequences
Returns
int
The number of sequences in this Encoding
property offsets
Returns
List[Tuple[int, int]]
The offsets associated with each token
These offsets let you slice the input string, and thus retrieve the original part that led to producing the corresponding token.
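For example (toy word-level tokenizer, illustrative input), each offset pair slices the original string back out:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

text = "hello   world"
encoding = tokenizer.encode(text)
for token, (start, end) in zip(encoding.tokens, encoding.offsets):
    # The slice of the input that produced this token
    print(token, (start, end), repr(text[start:end]))
# hello (0, 5) 'hello'
# world (8, 13) 'world'
```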
property overflowing
Returns
List[Encoding]
The overflowing Encoding pieces
When using truncation, the Tokenizer takes care of splitting the output into as many pieces as required to match the specified maximum length. This field lets you retrieve all the subsequent pieces.
When you use pairs of sequences, the overflowing pieces will contain enough variations to cover all the possible combinations, while respecting the provided maximum length.
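A sketch with truncation enabled (toy tokenizer; the max_length is illustrative) showing how the extra pieces end up in overflowing:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

vocab = {"[UNK]": 0, "a": 1, "b": 2, "c": 3, "d": 4, "e": 5}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
tokenizer.enable_truncation(max_length=2)

encoding = tokenizer.encode("a b c d e")
print(encoding.tokens)       # ['a', 'b']
for piece in encoding.overflowing:
    print(piece.tokens)      # ['c', 'd'], then ['e']
```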
property sequence_ids
Returns
List[Optional[int]]
A list of optional sequence indices
They represent the index of the input sequence associated with each token. The sequence id can be None if the token is not related to any input sequence, as is the case for special tokens, for example.
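For instance, when encoding an illustrative pair of sequences with a toy word-level tokenizer:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

vocab = {"[UNK]": 0, "hello": 1, "world": 2}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Encode a pair: tokens from the first sequence get id 0, the second get 1
encoding = tokenizer.encode("hello world", "hello")
print(encoding.tokens)        # ['hello', 'world', 'hello']
print(encoding.sequence_ids)  # [0, 0, 1]
print(encoding.n_sequences)   # 2
```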
property special_tokens_mask
Returns
List[int]
The special tokens mask
This indicates which tokens are special tokens, and which are not.
property tokens
Returns
List[str]
The list of tokens
They are the string representations of the IDs.
property type_ids
Returns
List[int]
The list of type IDs
Generally used for tasks like sequence classification or question answering, these IDs let the LM know which input sequence each token corresponds to.
property word_ids
Returns
List[Optional[int]]
A list of optional word indices
They represent the index of the word associated with each token. When the input is pre-tokenized, they correspond to the ID of the given input label; otherwise they correspond to the word indices as defined by the PreTokenizer that was used.
For special tokens and such (any token that was generated from something that was not part of the input), the output is None.
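A sketch showing the None entries for special tokens, using a toy tokenizer with an illustrative TemplateProcessing post-processor:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.processors import TemplateProcessing

vocab = {"[UNK]": 0, "[CLS]": 1, "[SEP]": 2, "hello": 3, "world": 4}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
# Wrap every single sequence in [CLS] ... [SEP]
tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)

encoding = tokenizer.encode("hello world")
print(encoding.tokens)    # ['[CLS]', 'hello', 'world', '[SEP]']
print(encoding.word_ids)  # [None, 0, 1, None]  (special tokens map to None)
```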
property words
Returns
List[Optional[int]]
A list of optional word indices
This is deprecated and will be removed in a future version. Please use word_ids instead.
They represent the index of the word associated with each token. When the input is pre-tokenized, they correspond to the ID of the given input label; otherwise they correspond to the word indices as defined by the PreTokenizer that was used.
For special tokens and such (any token that was generated from something that was not part of the input), the output is None.
char_to_token( char_pos, sequence_index = 0 ) → int
Parameters
char_pos (int) – The position of a char in the input string
sequence_index (int, defaults to 0) – The index of the sequence that contains the target char
Returns
int
The index of the token that contains this char in the encoded sequence
Get the token that contains the char at the given position in the input sequence.
char_to_word( char_pos, sequence_index = 0 ) → int
Parameters
char_pos (int) – The position of a char in the input string
sequence_index (int, defaults to 0) – The index of the sequence that contains the target char
Returns
int
The index of the word that contains this char in the input sequence
Get the word that contains the char at the given position in the input sequence.
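Both lookups side by side, on an illustrative input (char 7 is the 'o' of "world"; char 5 is the whitespace between the words):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("hello world")
print(encoding.char_to_token(7))  # 1: char 7 falls inside the token 'world'
print(encoding.char_to_word(7))   # 1: ...which is word 1 of the input
print(encoding.char_to_token(5))  # None: char 5 is whitespace, not part of any token
```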
merge( encodings, growing_offsets = True ) → Encoding
Parameters
encodings (List[Encoding]) – The list of encodings that should be merged into one
growing_offsets (bool, defaults to True) – Whether the offsets should accumulate while merging
Returns
Encoding
The resulting Encoding
Merge the list of encodings into one final Encoding.
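A sketch of merging two independently produced encodings (toy tokenizer, illustrative inputs):

```python
from tokenizers import Encoding, Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

first = tokenizer.encode("hello")
second = tokenizer.encode("world")

merged = Encoding.merge([first, second], growing_offsets=True)
print(merged.tokens)   # ['hello', 'world']
print(merged.offsets)  # [(0, 5), (5, 10)]: second piece's offsets were shifted
```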
pad( length, direction = 'right', pad_id = 0, pad_type_id = 0, pad_token = '[PAD]' )
Parameters
length (int) – The desired length
direction (str, defaults to 'right') – The expected padding direction. Can be either 'right' or 'left'
pad_id (int, defaults to 0) – The ID corresponding to the padding token
pad_type_id (int, defaults to 0) – The type ID corresponding to the padding token
pad_token (str, defaults to '[PAD]') – The pad token to use
Pad the Encoding to the given length.
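A sketch padding a single Encoding in place (toy tokenizer; the pad_id matches the illustrative vocabulary):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

vocab = {"[UNK]": 0, "[PAD]": 1, "hello": 2, "world": 3}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("hello world")
encoding.pad(4, direction="right", pad_id=1, pad_token="[PAD]")  # modifies in place
print(encoding.tokens)          # ['hello', 'world', '[PAD]', '[PAD]']
print(encoding.attention_mask)  # [1, 1, 0, 0]
```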
set_sequence_id( sequence_id )
Set the given sequence index for the whole range of tokens contained in this Encoding.
token_to_chars( token_index ) → Tuple[int, int]
Parameters
token_index (int) – The index of a token in the encoded sequence
Returns
Tuple[int, int]
The token offsets (first, last + 1)
Get the offsets of the token at the given index.
The returned offsets are related to the input sequence that contains the token. In order to determine which input sequence it belongs to, you must call token_to_sequence().
token_to_sequence( token_index ) → int
Parameters
token_index (int) – The index of a token in the encoded sequence
Returns
int
The sequence id of the given token
Get the index of the sequence represented by the given token.
In the general use case, this method returns 0 for a single sequence or the first sequence of a pair, and 1 for the second sequence of a pair.
token_to_word( token_index ) → int
Parameters
token_index (int) – The index of a token in the encoded sequence
Returns
int
The index of the word in the relevant input sequence.
Get the index of the word that contains the token in one of the input sequences.
The returned word index is related to the input sequence that contains the token. In order to determine which input sequence it belongs to, you must call token_to_sequence().
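The three token_to_* lookups together, on an illustrative pair of sequences encoded with a toy tokenizer:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

text = "hello world"
encoding = tokenizer.encode(text, "hello")  # a pair of sequences
start, end = encoding.token_to_chars(1)
print(text[start:end])                      # 'world'
print(encoding.token_to_sequence(1))        # 0: token 1 is in the first sequence
print(encoding.token_to_sequence(2))        # 1: token 2 is in the second
print(encoding.token_to_word(1))            # 1: second word of its sequence
```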
truncate( max_length, stride = 0, direction = 'right' )
Parameters
max_length (int) – The desired length
stride (int, defaults to 0) – The length of previous content to be included in each overflowing piece
direction (str, defaults to 'right') – The truncation direction
Truncate the Encoding to the given length.
If this Encoding represents multiple sequences, that information is lost when truncating: the result will be considered as representing a single sequence.
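A sketch of in-place truncation with a stride (toy tokenizer; max_length and stride are illustrative):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

vocab = {"[UNK]": 0, "a": 1, "b": 2, "c": 3, "d": 4}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("a b c d")
encoding.truncate(2, stride=1)  # keep 2 tokens, overlap pieces by 1
print(encoding.tokens)          # ['a', 'b']
for piece in encoding.overflowing:
    print(piece.tokens)         # ['b', 'c'], then ['c', 'd']
```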
word_to_chars( word_index, sequence_index = 0 ) → Tuple[int, int]
Parameters
word_index (int) – The index of a word in one of the input sequences
sequence_index (int, defaults to 0) – The index of the sequence that contains the target word
Returns
Tuple[int, int]
The range of characters (span): (first, last + 1)
Get the offsets of the word at the given index in one of the input sequences.
word_to_tokens( word_index, sequence_index = 0 ) → Tuple[int, int]
Parameters
word_index (int) – The index of a word in one of the input sequences
sequence_index (int, defaults to 0) – The index of the sequence that contains the target word
Returns
Tuple[int, int]
The range of tokens: (first, last + 1)
Get the encoded tokens corresponding to the word at the given index in one of the input sequences.
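Both word_to_* lookups on an illustrative input, with a toy word-level tokenizer:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

text = "hello world"
encoding = tokenizer.encode(text)
start, end = encoding.word_to_chars(1)    # char span of word 1
print(text[start:end])                    # 'world'
first, last = encoding.word_to_tokens(1)  # token range of word 1: (first, last + 1)
print(encoding.tokens[first:last])        # ['world']
```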