Data Collator
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of the same type as the elements of train_dataset or eval_dataset.
To be able to build batches, data collators may apply some processing (like padding). Some of them (like DataCollatorForLanguageModeling) also apply some random data augmentation (like random masking) on the formed batch.
Examples of use can be found in the example scripts or example notebooks.
transformers.default_data_collator
( features: typing.List[InputDataClass], return_tensors = 'pt' )
Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:
label: handles a single value (int or float) per object
label_ids: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for examples of how it's useful.
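A minimal usage sketch (the token ids and labels below are made-up toy values):
from transformers import default_data_collator

features = [
    {"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1], "label": 0},
    {"input_ids": [101, 2062, 102], "attention_mask": [1, 1, 1], "label": 1},
]

# Sequences must already be the same length: default_data_collator does not pad.
batch = default_data_collator(features, return_tensors="pt")
print(batch["input_ids"].shape)  # torch.Size([2, 3])
print(batch["labels"])           # tensor([0, 1]); the "label" key is collected into "labels"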
class transformers.DefaultDataCollator
( return_tensors: str = 'pt' )
Parameters
return_tensors (str) - The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:
label: handles a single value (int or float) per object
label_ids: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for examples of how it's useful.
This is an object (like other data collators) rather than a pure function like default_data_collator. This can be helpful if you need to set a return_tensors value at initialization.
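For illustration, the same collation done through the object form, configuring the tensor type once at initialization (toy inputs again):
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator(return_tensors="np")
batch = data_collator(
    [
        {"input_ids": [101, 2023, 102], "label": 0},
        {"input_ids": [101, 2062, 102], "label": 1},
    ]
)
print(batch["labels"])  # array([0, 1])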
class transformers.DataCollatorWithPadding
( tokenizer: PreTrainedTokenizerBase, padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True, max_length: typing.Optional[int] = None, pad_to_multiple_of: typing.Optional[int] = None, return_tensors: str = 'pt' )
Parameters
tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) - The tokenizer used for encoding the data.
padding (bool, str or PaddingStrategy, optional, defaults to True) - Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) - Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (int, optional) - If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta).
return_tensors (str) - The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator that will dynamically pad the inputs received.
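A minimal sketch of dynamic padding; the "bert-base-uncased" checkpoint is only an example, any tokenizer works:
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer, pad_to_multiple_of=8)

# Two encodings of different lengths; the collator pads them to a common length.
features = [tokenizer("Hello world"), tokenizer("A somewhat longer example sentence")]
batch = collator(features)
print(batch["input_ids"].shape)  # (2, L) with L a multiple of 8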
class transformers.DataCollatorForTokenClassification
( tokenizer: PreTrainedTokenizerBase, padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True, max_length: typing.Optional[int] = None, pad_to_multiple_of: typing.Optional[int] = None, label_pad_token_id: int = -100, return_tensors: str = 'pt' )
Parameters
tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) - The tokenizer used for encoding the data.
padding (bool, str or PaddingStrategy, optional, defaults to True) - Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) - Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (int, optional) - If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta).
label_pad_token_id (int, optional, defaults to -100) - The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions).
return_tensors (str) - The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator that will dynamically pad the inputs received, as well as the labels.
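A sketch for token classification; the checkpoint name and token ids are illustrative only:
from transformers import AutoTokenizer, DataCollatorForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

features = [
    {"input_ids": [101, 7592, 102], "labels": [0, 1, 0]},
    {"input_ids": [101, 7592, 2088, 999, 102], "labels": [0, 1, 2, 0, 0]},
]
batch = collator(features)
print(batch["labels"])  # the shorter label list is padded on the right with -100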
class transformers.DataCollatorForSeq2Seq
( tokenizer: PreTrainedTokenizerBase, model: typing.Optional[typing.Any] = None, padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True, max_length: typing.Optional[int] = None, pad_to_multiple_of: typing.Optional[int] = None, label_pad_token_id: int = -100, return_tensors: str = 'pt' )
Parameters
tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) - The tokenizer used for encoding the data.
model (PreTrainedModel, optional) - The model that is being trained. If set and has the prepare_decoder_input_ids_from_labels method, use it to prepare the decoder_input_ids. This is useful when using label_smoothing to avoid calculating loss twice.
padding (bool, str or PaddingStrategy, optional, defaults to True) - Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) - Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (int, optional) - If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta).
label_pad_token_id (int, optional, defaults to -100) - The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions).
return_tensors (str) - The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator that will dynamically pad the inputs received, as well as the labels.
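A sketch with a sequence-to-sequence model; "t5-small" is just an example checkpoint, and passing model lets the collator also create decoder_input_ids from the padded labels:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, padding="longest")

features = [
    {"input_ids": tokenizer("translate English to German: Hello").input_ids,
     "labels": tokenizer("Hallo").input_ids},
    {"input_ids": tokenizer("translate English to German: How are you?").input_ids,
     "labels": tokenizer("Wie geht es dir?").input_ids},
]
batch = collator(features)
print(batch.keys())  # input_ids, attention_mask, labels, decoder_input_ids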
class transformers.DataCollatorForLanguageModeling
( tokenizer: PreTrainedTokenizerBase, mlm: bool = True, mlm_probability: float = 0.15, pad_to_multiple_of: typing.Optional[int] = None, tf_experimental_compile: bool = False, return_tensors: str = 'pt' )
Parameters
tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) - The tokenizer used for encoding the data.
mlm (bool, optional, defaults to True) - Whether or not to use masked language modeling. If set to False, the labels are the same as the inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for non-masked tokens and the value to predict for the masked token.
mlm_probability (float, optional, defaults to 0.15) - The probability with which to (randomly) mask tokens in the input, when mlm is set to True.
pad_to_multiple_of (int, optional) - If set, will pad the sequence to a multiple of the provided value.
return_tensors (str) - The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length.
For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the "special_tokens_mask" key, as returned by a PreTrainedTokenizer or a PreTrainedTokenizerFast with the argument return_special_tokens_mask=True.
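A masked language modeling sketch; "bert-base-uncased" is an example checkpoint, and return_special_tokens_mask=True follows the tip above:
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

features = [
    tokenizer("The quick brown fox jumps over the lazy dog.", return_special_tokens_mask=True),
    tokenizer("Data collators build batches.", return_special_tokens_mask=True),
]
batch = collator(features)
# Masked positions keep their original token id in "labels"; every other position is -100.
print(batch["input_ids"].shape, batch["labels"].shape)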
numpy_mask_tokens
( inputs: typing.Any, special_tokens_mask: typing.Optional[typing.Any] = None )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
tf_mask_tokens
( inputs: typing.Any, vocab_size, mask_token_id, special_tokens_mask: typing.Optional[typing.Any] = None )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
torch_mask_tokens
( inputs: typing.Any, special_tokens_mask: typing.Optional[typing.Any] = None )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
class transformers.DataCollatorForWholeWordMask
( tokenizer: PreTrainedTokenizerBase, mlm: bool = True, mlm_probability: float = 0.15, pad_to_multiple_of: typing.Optional[int] = None, tf_experimental_compile: bool = False, return_tensors: str = 'pt' )
Data collator used for language modeling that masks entire words.
collates batches of tensors, honoring their tokenizer's pad_token
preprocesses batches for masked language modeling
This collator relies on details of the implementation of subword tokenization by BertTokenizer, specifically that subword tokens are prefixed with ##. For tokenizers that do not adhere to this scheme, this collator will produce an output that is roughly equivalent to DataCollatorForLanguageModeling.
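A whole word masking sketch; "bert-base-uncased" is an example of a WordPiece tokenizer that uses the ## prefix this collator expects:
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

features = [
    tokenizer("Tokenization splits uncommon words into subwords."),
    tokenizer("Whole word masking masks every subword of a chosen word."),
]
batch = collator(features)  # returns "input_ids" and "labels"
print(batch["input_ids"].shape, batch["labels"].shape)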
numpy_mask_tokens
( inputs: typing.Any, mask_labels: typing.Any )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing "mask_labels" means whole word masking (wwm) is used; indices are masked directly according to the provided reference.
tf_mask_tokens
( inputs: typing.Any, mask_labels: typing.Any )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing "mask_labels" means whole word masking (wwm) is used; indices are masked directly according to the provided reference.
torch_mask_tokens
( inputs: typing.Any, mask_labels: typing.Any )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing "mask_labels" means whole word masking (wwm) is used; indices are masked directly according to the provided reference.
class transformers.DataCollatorForPermutationLanguageModeling
( tokenizer: PreTrainedTokenizerBase, plm_probability: float = 0.16666666666666666, max_span_length: int = 5, return_tensors: str = 'pt' )
Data collator used for permutation language modeling.
collates batches of tensors, honoring their tokenizer's pad_token
preprocesses batches for permutation language modeling with procedures specific to XLNet
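A usage sketch; "xlnet-base-cased" is an example checkpoint, and the trimming below reflects the collator's requirement that sequence lengths be even:
from transformers import AutoTokenizer, DataCollatorForPermutationLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
collator = DataCollatorForPermutationLanguageModeling(
    tokenizer=tokenizer, plm_probability=1 / 6, max_span_length=5
)

ids = tokenizer("An example sentence for permutation language modeling.")["input_ids"]
if len(ids) % 2:  # the collator raises an error on odd-length sequences
    ids = ids[:-1]
batch = collator([{"input_ids": ids}])
print(batch.keys())  # input_ids, perm_mask, target_mapping, labels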
numpy_mask_tokens
( inputs: typing.Any )
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
Step 0: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far).
Step 1: Sample a span_length from the interval [1, max_span_length] (length of the span of tokens to be masked).
Step 2: Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked.
Step 3: Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length.
Step 4: Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.
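The following toy loop only illustrates the sampling procedure described above (made-up parameter values, not the library implementation):
import random

max_span_length, plm_probability, max_len = 5, 1 / 6, 64
cur_len, masked_positions = 0, []
while cur_len < max_len:
    span_length = random.randint(1, max_span_length)                          # Step 1
    context_length = int(span_length / plm_probability)                       # Step 2
    start_index = cur_len + random.randint(0, context_length - span_length)   # Step 3
    masked_positions.extend(range(start_index, start_index + span_length))
    cur_len += context_length                                                 # Step 4
print(sorted(p for p in masked_positions if p < max_len))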
tf_mask_tokens
( inputs: typing.Any )
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
Step 0: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far).
Step 1: Sample a span_length from the interval [1, max_span_length] (length of the span of tokens to be masked).
Step 2: Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked.
Step 3: Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length.
Step 4: Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.
torch_mask_tokens
( inputs: typing.Any )
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
Step 0: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far).
Step 1: Sample a span_length from the interval [1, max_span_length] (length of the span of tokens to be masked).
Step 2: Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked.
Step 3: Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length.
Step 4: Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.