Preprocess
Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for:
Text, use a Tokenizer to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
Speech and audio, use a Feature extractor to extract sequential features from audio waveforms and convert them into tensors.
Images, use an ImageProcessor to convert images into tensors.
Multimodal inputs, use a Processor to combine a tokenizer and a feature extractor or image processor.
AutoProcessor always works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor, or processor.
Before you begin, install 🤗 Datasets so you can load some datasets to experiment with:
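A minimal install command (shown here for pip; adjust for your environment):

```shell
pip install datasets
```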
Natural Language Processing
The main tool for preprocessing textual data is a tokenizer. A tokenizer splits text into tokens according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer.
If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same token-to-index mapping (usually referred to as the vocab) as during pretraining.
Get started by loading a pretrained tokenizer with the AutoTokenizer.from_pretrained() method. This downloads the vocab a model was pretrained with:
Then pass your text to the tokenizer:
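A sketch of these two steps (the bert-base-cased checkpoint here is an assumption; any checkpoint with an associated tokenizer works the same way):

```python
from transformers import AutoTokenizer

# download the vocab and tokenizer configuration for a pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# tokenizing a sentence returns a dictionary of input_ids,
# token_type_ids, and attention_mask
encoded_input = tokenizer(
    "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
)
print(encoded_input)
```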
The tokenizer returns a dictionary with three important items:
input_ids are the indices corresponding to each token in the sentence.
attention_mask indicates whether a token should be attended to or not.
token_type_ids identifies which sequence a token belongs to when there is more than one sequence.
Return your input by decoding the input_ids:
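For example (again assuming the bert-base-cased checkpoint), decoding reveals the special tokens the tokenizer added:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoded_input = tokenizer("Do not meddle in the affairs of wizards.")

# decode maps the ids back to text, exposing any special tokens that were added
decoded = tokenizer.decode(encoded_input["input_ids"])
print(decoded)  # "[CLS] Do not meddle in the affairs of wizards. [SEP]"
```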
As you can see, the tokenizer added two special tokens - CLS
and SEP
(classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you.
If there are several sentences you want to preprocess, pass them as a list to the tokenizer:
Pad
Sentences aren’t always the same length which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.
Set the padding parameter to True to pad the shorter sequences in the batch to match the longest sequence:
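Sketched with the same illustrative batch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]

# padding=True pads shorter sequences (with id 0 for BERT) to the longest in the batch
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input["input_ids"])
```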
The first and third sentences are now padded with 0's because they are shorter.
Truncation
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you’ll need to truncate the sequence to a shorter length.
Set the truncation parameter to True to truncate a sequence to the maximum length accepted by the model:
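For example (the short sentences here will not actually be truncated, but sequences longer than the model maximum would be):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]

# truncation=True cuts sequences down to the model's maximum length (512 for BERT)
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
print(encoded_input["input_ids"])
```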
Check out the Padding and truncation concept guide to learn more about the different padding and truncation arguments.
Build tensors
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the return_tensors parameter to either pt for PyTorch, or tf for TensorFlow:
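A PyTorch sketch (swap "pt" for "tf" if you use TensorFlow; this assumes torch is installed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
    "What about elevensies?",
]

# return_tensors="pt" stacks the padded sequences into a single PyTorch tensor
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(encoded_input["input_ids"].shape)  # (batch_size, max_sequence_length)
```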
Audio
For audio tasks, you’ll need a feature extractor to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.
Load the MInDS-14 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:
Access the first element of the audio column to take a look at the input. Calling the audio column automatically loads and resamples the audio file:
This returns three items:
array is the speech signal loaded - and potentially resampled - as a 1D array.
path points to the location of the audio file.
sampling_rate refers to how many data points in the speech signal are measured per second.
For this tutorial, you’ll use the Wav2Vec2 model. Take a look at the model card, and you’ll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important your audio data’s sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data’s sampling rate isn’t the same, then you need to resample your data.
Use 🤗 Datasets' cast_column method to upsample the sampling rate to 16kHz:
Call the audio column again to resample the audio file:
Next, load a feature extractor to normalize and pad the input. When padding textual data, a 0 is added for shorter sequences. The same idea applies to audio data: the feature extractor adds a 0 - interpreted as silence - to array.
Load the feature extractor with AutoFeatureExtractor.from_pretrained():
Pass the audio array to the feature extractor. We also recommend passing the sampling_rate argument to the feature extractor in order to better debug any silent errors that may occur:
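A sketch using the facebook/wav2vec2-base checkpoint (an assumption), with a synthetic one-second waveform standing in for a dataset sample to keep the example light:

```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# a one-second synthetic waveform at 16kHz stands in for dataset[0]["audio"]["array"]
waveform = np.random.randn(16_000).astype(np.float32)

# passing sampling_rate lets the extractor raise on a mismatch
# instead of failing silently
audio_input = feature_extractor(waveform, sampling_rate=16_000)
print(audio_input["input_values"][0][:5])
```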
Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:
Create a function to preprocess the dataset so the audio samples are the same lengths. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:
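A sketch of such a function (the max_length of 100,000 samples is illustrative; two dummy clips of different lengths stand in for dataset rows):

```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    # truncate clips longer than max_length and pad shorter ones
    inputs = feature_extractor(
        audio_arrays,
        sampling_rate=16_000,
        padding=True,
        max_length=100_000,
        truncation=True,
    )
    return inputs

# two dummy clips of different lengths stand in for dataset examples
batch = {
    "audio": [
        {"array": np.random.randn(60_000).astype(np.float32)},
        {"array": np.random.randn(120_000).astype(np.float32)},
    ]
}
out = preprocess_function(batch)
print([len(x) for x in out["input_values"]])  # both 100000
```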
Apply the preprocess_function to the first few examples in the dataset:
The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now!
Computer vision
For computer vision tasks, you’ll need an image processor to prepare your dataset for the model. Image preprocessing consists of several steps that convert images into the input expected by the model. These steps include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors.
Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation transform image data, but they serve different purposes:
Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations.
Image preprocessing guarantees that the images match the model’s expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained.
You can use any library you like for image augmentation. For image preprocessing, use the ImageProcessor associated with the model.
Load the food101 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets:
Use the 🤗 Datasets split parameter to only load a small sample from the training split since the dataset is quite large!
Next, take a look at the image with the 🤗 Datasets Image feature:
Load the image processor with AutoImageProcessor.from_pretrained():
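For example (the ViT checkpoint is an assumption; use the checkpoint you plan to fine-tune):

```python
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

# the size attribute records the input size the model expects
print(image_processor.size)
```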
First, let's add some image augmentation. You can use any library you prefer, but in this tutorial, we'll use torchvision's transforms module. If you're interested in using another data augmentation library, learn how in the Albumentations or Kornia notebooks.
Here we use Compose to chain together a couple of transforms - RandomResizedCrop and ColorJitter. Note that for resizing, we can get the image size requirements from the image_processor. For some models an exact height and width are expected, for others only the shortest_edge is defined.
The model accepts pixel_values as its input. ImageProcessor can take care of normalizing the images and generating the appropriate tensors. Create a function that combines image augmentation and image preprocessing for a batch of images and generates pixel_values:
In the example above we set do_resize=False because we have already resized the images in the image augmentation transformation, and leveraged the size attribute from the appropriate image_processor. If you do not resize images during image augmentation, leave this parameter out. By default, ImageProcessor will handle the resizing.
If you wish to normalize images as a part of the augmentation transformation, use the image_processor.image_mean and image_processor.image_std values.
Then use 🤗 Datasets set_transform to apply the transforms on the fly:
Now when you access the image, you'll notice the image processor has added pixel_values. You can pass your processed dataset to the model now!
Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different.
For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, ImageProcessor offers post-processing methods. These methods convert the model's raw outputs into meaningful predictions such as bounding boxes or segmentation maps.
Pad
In some cases, for instance, when fine-tuning DETR, the model applies scale augmentation at training time. This may cause images to be different sizes in a batch. You can use DetrImageProcessor.pad() and define a custom collate_fn to batch images together.
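A sketch of such a collate_fn (the facebook/detr-resnet-50 checkpoint is an assumption, and two dummy tensors of different sizes stand in for scale-augmented examples):

```python
import torch
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")

def collate_fn(batch):
    # pad every image up to the largest height/width in the batch and get a
    # pixel_mask marking which pixels are real versus padding
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = image_processor.pad(pixel_values, return_tensors="pt")
    return {
        "pixel_values": encoding["pixel_values"],
        "pixel_mask": encoding["pixel_mask"],
        "labels": [item["labels"] for item in batch],
    }

# two dummy "images" of different sizes stand in for scale-augmented examples
batch = collate_fn([
    {"pixel_values": torch.randn(3, 200, 300), "labels": {}},
    {"pixel_values": torch.randn(3, 250, 280), "labels": {}},
])
print(batch["pixel_values"].shape)  # padded to the largest size in the batch
```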
Multimodal
For tasks involving multimodal inputs, you'll need a processor to prepare your dataset for the model. A processor couples together two processing objects such as a tokenizer and feature extractor.
Load the LJ Speech dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):
For ASR, you're mainly focused on audio and text so you can remove the other columns:
Now take a look at the audio and text columns:
Remember you should always resample your audio dataset’s sampling rate to match the sampling rate of the dataset used to pretrain a model!
Load a processor with AutoProcessor.from_pretrained():
Create a function to process the audio data contained in array to input_values, and tokenize text to labels. These are the inputs to the model:
Apply the prepare_dataset function to a sample:
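A sketch of both steps, assuming the facebook/wav2vec2-base-960h checkpoint (which ships both a tokenizer and a feature extractor) and using a small synthetic example in place of an LJ Speech row:

```python
import numpy as np
from transformers import AutoProcessor

# a checkpoint whose repo contains both a tokenizer and a feature extractor
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")

def prepare_dataset(example):
    audio = example["audio"]
    # the processor turns the waveform into input_values and the transcript into labels
    example.update(processor(audio=audio["array"], sampling_rate=16_000, text=example["text"]))
    return example

# a synthetic one-second waveform and transcript stand in for a dataset example
example = {
    "audio": {"array": np.random.randn(16_000).astype(np.float32), "sampling_rate": 16_000},
    "text": "PRINTING IN THE ONLY SENSE WITH WHICH WE ARE AT PRESENT CONCERNED",
}
out = prepare_dataset(example)
print(sorted(k for k in out if k in ("input_values", "labels")))
```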
The processor has now added input_values and labels, and the sampling rate has also been correctly downsampled to 16kHz. You can pass your processed dataset to the model now!