Semantic segmentation
Semantic segmentation assigns a label or class to each individual pixel of an image. There are several types of segmentation, and in the case of semantic segmentation, no distinction is made between unique instances of the same object. All instances are given the same label (for example, "car" instead of "car-1" and "car-2"). Common real-world applications of semantic segmentation include training self-driving cars to identify pedestrians and important traffic information, identifying cells and abnormalities in medical imagery, and monitoring environmental changes from satellite imagery.
This guide will show you how to:
Finetune SegFormer on the SceneParse150 dataset.
Use your finetuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
BEiT, Data2VecVision, DPT, MobileNetV2, MobileViT, MobileViTV2, SegFormer, UPerNet
Before you begin, make sure you have all the necessary libraries installed:
pip install -q datasets transformers evaluate

We encourage you to log in to your BOINC AI account so you can upload and share your model with the community. When prompted, enter your token to log in:
>>> from boincai_hub import notebook_login
>>> notebook_login()

Load SceneParse150 dataset
Start by loading a smaller subset of the SceneParse150 dataset from the BOINC AI Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
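A minimal sketch of this step, assuming the dataset is hosted on the Hub under the scene_parse_150 identifier (adjust the name and slice size to your setup):

>>> from datasets import load_dataset

>>> ds = load_dataset("scene_parse_150", split="train[:50]")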
Split the dataset's train split into a train and test set with the train_test_split method:
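For example, an 80/20 split (the test_size value is only illustrative):

>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]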
Then take a look at an example:
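Indexing into the training split returns a dictionary with the three fields described below:

>>> example = train_ds[0]
>>> example.keys()
dict_keys(['image', 'annotation', 'scene_category'])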
image: a PIL image of the scene.
annotation: a PIL image of the segmentation map, which is also the model's target.
scene_category: a category id that describes the image scene like "kitchen" or "office".

In this guide, you'll only need image and annotation, both of which are PIL images.
You'll also want to create a dictionary that maps a label id to a label class, which will be useful when you set up the model later. Download the mappings from the Hub and create the id2label and label2id dictionaries:
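A sketch of building the two dictionaries, assuming you have already fetched the ADE20K label file (for example, ade20k-id2label.json) from the Hub to a local path; the exact repository and download helper depend on your hub client:

>>> import json

>>> # the JSON file maps stringified class ids to class names, e.g. {"0": "wall", "1": "building", ...}
>>> with open("ade20k-id2label.json", "r") as f:
...     id2label = json.load(f)
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)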
Preprocess
The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set reduce_labels=True to subtract one from all the labels. The zero-index is replaced by 255 so it's ignored by SegFormer's loss function:
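A sketch assuming the nvidia/mit-b0 SegFormer checkpoint; swap in whichever checkpoint you plan to fine-tune:

>>> from transformers import AutoImageProcessor

>>> checkpoint = "nvidia/mit-b0"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)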
Pytorch
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the ColorJitter transform from torchvision to randomly change the color properties of an image, but you can also use any image library you like.
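For example (the jitter strengths below are illustrative; tune them to your data):

>>> from torchvision.transforms import ColorJitter

>>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)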
Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into pixel_values and annotations to labels. For the training set, jitter is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the images, and only crops the labels because no data augmentation is applied during testing.
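A sketch of the two functions, assuming the jitter and image_processor objects defined above:

>>> def train_transforms(example_batch):
...     images = [jitter(x) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     # the image processor pairs each image with its segmentation map
...     inputs = image_processor(images, labels)
...     return inputs

>>> def val_transforms(example_batch):
...     images = [x for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs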
To apply the jitter over the entire dataset, use the BOINC AI Datasets set_transform function. The transform is applied on the fly, which is faster and consumes less disk space:
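Assuming the train_ds and test_ds splits from earlier:

>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)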
TensorFlow
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use tf.image to randomly change the color properties of an image, but you can also use any image library you like. Define two separate transformation functions:
training data transformations that include image augmentation
validation data transformations that only transpose the images, since computer vision models in BOINC AI Transformers expect channels-first layout
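A sketch of the two functions (the augmentation strengths are illustrative):

>>> import tensorflow as tf

>>> def aug_transforms(image):
...     image = tf.keras.utils.img_to_array(image)
...     image = tf.image.random_brightness(image, 0.25)
...     image = tf.image.random_contrast(image, 0.5, 2.0)
...     image = tf.image.random_saturation(image, 0.75, 1.25)
...     image = tf.image.random_hue(image, 0.1)
...     # move to channels-first layout expected by the model
...     image = tf.transpose(image, (2, 0, 1))
...     return image

>>> def transforms(image):
...     image = tf.keras.utils.img_to_array(image)
...     image = tf.transpose(image, (2, 0, 1))
...     return image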
Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply the image transformations and use the previously loaded image_processor to convert the images into pixel_values and annotations to labels. The image processor also takes care of resizing and normalizing the images.
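A sketch, assuming the aug_transforms and transforms functions defined above:

>>> def train_transforms(example_batch):
...     images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs

>>> def val_transforms(example_batch):
...     images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs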
To apply the preprocessing transformations over the entire dataset, use the BOINC AI Datasets set_transform function. The transform is applied on the fly, which is faster and consumes less disk space:
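As before:

>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)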
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the BOINC AI Evaluate library. For this task, load the mean Intersection over Union (IoU) metric (see the BOINC AI Evaluate quick tour to learn more about how to load and compute a metric):
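For example:

>>> import evaluate

>>> metric = evaluate.load("mean_iou")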
Then create a function to compute the metrics. The logits your model predicts first need to be upsampled to match the size of the labels and converted to predicted class ids before you can call compute:
Pytorch
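A sketch of such a function, assuming the metric, num_labels, and the already-reduced labels (ignore index 255) from the earlier steps:

>>> import numpy as np
>>> import torch
>>> from torch import nn

>>> def compute_metrics(eval_pred):
...     with torch.no_grad():
...         logits, labels = eval_pred
...         logits_tensor = torch.from_numpy(logits)
...         # upsample the logits to the label resolution before taking the argmax
...         logits_tensor = nn.functional.interpolate(
...             logits_tensor,
...             size=labels.shape[-2:],
...             mode="bilinear",
...             align_corners=False,
...         ).argmax(dim=1)
...         pred_labels = logits_tensor.detach().cpu().numpy()
...         metrics = metric.compute(
...             predictions=pred_labels,
...             references=labels,
...             num_labels=num_labels,
...             ignore_index=255,
...             reduce_labels=False,
...         )
...         # convert per-category arrays to lists so they can be logged
...         for key, value in metrics.items():
...             if isinstance(value, np.ndarray):
...                 metrics[key] = value.tolist()
...         return metrics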
TensorFlow
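A sketch of the TensorFlow variant, again assuming metric, num_labels, and the already-reduced labels from earlier:

>>> def compute_metrics(eval_pred):
...     logits, labels = eval_pred
...     # move channels last so tf.image.resize can upsample to the label resolution
...     logits = tf.transpose(logits, perm=[0, 2, 3, 1])
...     logits_resized = tf.image.resize(logits, size=tf.shape(labels)[1:], method="bilinear")
...     pred_labels = tf.argmax(logits_resized, axis=-1)
...     metrics = metric.compute(
...         predictions=pred_labels,
...         references=labels,
...         num_labels=num_labels,
...         ignore_index=255,
...         reduce_labels=False,  # labels were already reduced by the image processor
...     )
...     # expose the per-category scores under readable names
...     per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
...     per_category_iou = metrics.pop("per_category_iou").tolist()
...     metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
...     metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
...     return {"val_" + k: v for k, v in metrics.items()}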
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
Pytorch
If you aren't familiar with finetuning a model with the Trainer, take a look at the basic tutorial here!
You're ready to start training your model now! Load SegFormer with AutoModelForSemanticSegmentation, and pass the model the mapping between label ids and label classes:
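Assuming the same checkpoint and mappings defined earlier:

>>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer

>>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)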
At this point, only three steps remain:
Define your training hyperparameters in TrainingArguments. It is important you don't remove unused columns because this'll drop the image column. Without the image column, you can't create pixel_values. Set remove_unused_columns=False to prevent this behavior! The only other required parameter is output_dir, which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to BOINC AI to upload your model). At the end of each epoch, the Trainer will evaluate the IoU metric and save the training checkpoint.
Pass the training arguments to Trainer along with the model, dataset, tokenizer, data collator, and compute_metrics function.
Call train() to finetune your model.
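A sketch of those three steps; the hyperparameter values and the output_dir name are illustrative:

>>> training_args = TrainingArguments(
...     output_dir="segformer-b0-scene-parse-150",
...     learning_rate=6e-5,
...     num_train_epochs=50,
...     per_device_train_batch_size=2,
...     per_device_eval_batch_size=2,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     save_total_limit=3,
...     remove_unused_columns=False,  # keep the image column so pixel_values can be created
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=train_ds,
...     eval_dataset=test_ds,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()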
Once training is completed, share your model to the Hub with the push_to_hub() method so everyone can use your model:
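>>> trainer.push_to_hub()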
TensorFlow
If you are unfamiliar with fine-tuning a model with Keras, check out the basic tutorial first!
To fine-tune a model in TensorFlow, follow these steps:
Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
Instantiate a pretrained model.
Convert a BOINC AI Dataset to a tf.data.Dataset.
Compile your model.
Add callbacks to calculate metrics and upload your model to the BOINC AI Hub.
Use the fit() method to run the training.
Start by defining the hyperparameters, optimizer and learning rate schedule:
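A sketch using the create_optimizer helper from Transformers; the values are illustrative:

>>> from transformers import create_optimizer

>>> batch_size = 2
>>> num_epochs = 50
>>> num_train_steps = len(train_ds) * num_epochs
>>> learning_rate = 6e-5
>>> weight_decay_rate = 0.01

>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=learning_rate,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=weight_decay_rate,
...     num_warmup_steps=0,
... )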
Then, load SegFormer with TFAutoModelForSemanticSegmentation along with the label mappings, and compile it with the optimizer. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
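Assuming the checkpoint and mappings from earlier:

>>> from transformers import TFAutoModelForSemanticSegmentation

>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
>>> model.compile(optimizer=optimizer)  # the default task loss is used when no loss is passed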
Convert your datasets to the tf.data.Dataset format using the to_tf_dataset and the DefaultDataCollator:
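A sketch; the column names should match whatever keys your transforms produce (pixel_values and labels here):

>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")

>>> tf_train_dataset = train_ds.to_tf_dataset(
...     columns=["pixel_values", "labels"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_eval_dataset = test_ds.to_tf_dataset(
...     columns=["pixel_values", "labels"],
...     shuffle=False,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )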
To compute the accuracy from the predictions and push your model to the BOINC AI Hub, use Keras callbacks. Pass your compute_metrics function to KerasMetricCallback, and use the PushToHubCallback to upload the model:
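A sketch; the output_dir name is a placeholder:

>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(
...     metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
... )
>>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)
>>> callbacks = [metric_callback, push_to_hub_callback]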
Finally, you are ready to train your model! Call fit() with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model:
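>>> model.fit(
...     tf_train_dataset,
...     validation_data=tf_eval_dataset,
...     callbacks=callbacks,
...     epochs=num_epochs,
... )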
Congratulations! You have fine-tuned your model and shared it on the BOINC AI Hub. You can now use it for inference!
Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an image for inference:
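Any scene photo works; the path below is a placeholder:

>>> from PIL import Image

>>> image = Image.open("path/to/scene.jpg")
>>> image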

Pytorch
The simplest way to try out your finetuned model for inference is to use it in a pipeline(). Instantiate a pipeline for image segmentation with your model, and pass your image to it:
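A sketch, where "my-segmentation-model" stands in for the checkpoint you pushed to the Hub or saved locally:

>>> from transformers import pipeline

>>> segmenter = pipeline("image-segmentation", model="my-segmentation-model")
>>> segmenter(image)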
You can also manually replicate the results of the pipeline if you'd like. Process the image with an image processor and place the pixel_values on a GPU:
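Again using the placeholder checkpoint name:

>>> import torch
>>> from transformers import AutoImageProcessor

>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> image_processor = AutoImageProcessor.from_pretrained("my-segmentation-model")
>>> encoding = image_processor(image, return_tensors="pt")
>>> pixel_values = encoding.pixel_values.to(device)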
Pass your input to the model and return the logits:
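>>> from transformers import AutoModelForSemanticSegmentation

>>> model = AutoModelForSemanticSegmentation.from_pretrained("my-segmentation-model").to(device)
>>> with torch.no_grad():
...     outputs = model(pixel_values=pixel_values)
>>> logits = outputs.logits.cpu()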
Next, rescale the logits to the original image size:
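>>> upsampled_logits = torch.nn.functional.interpolate(
...     logits,
...     size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
...     mode="bilinear",
...     align_corners=False,
... )
>>> pred_seg = upsampled_logits.argmax(dim=1)[0]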
TensorFlow
Load an image processor to preprocess the image and return the input as TensorFlow tensors:
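Using the same placeholder checkpoint name:

>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("my-segmentation-model")
>>> inputs = image_processor(image, return_tensors="tf")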
Pass your input to the model and return the logits:
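>>> from transformers import TFAutoModelForSemanticSegmentation

>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("my-segmentation-model")
>>> logits = model(**inputs).logits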
Next, rescale the logits to the original image size and apply argmax on the class dimension:
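>>> import tensorflow as tf

>>> logits = tf.transpose(logits, [0, 2, 3, 1])  # channels-first -> channels-last for tf.image.resize
>>> upsampled_logits = tf.image.resize(
...     logits,
...     image.size[::-1],  # PIL size is (width, height); resize expects (height, width)
... )
>>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]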
To visualize the results, load the dataset color palette as ade_palette() that maps each class to its RGB values. Then you can combine and plot your image and the predicted segmentation map:
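A sketch of the overlay, assuming ade_palette() returns a list of 150 RGB triples:

>>> import matplotlib.pyplot as plt
>>> import numpy as np

>>> pred_seg = np.array(pred_seg)  # works for both the PyTorch and TensorFlow predictions
>>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
>>> palette = np.array(ade_palette())
>>> for label, color in enumerate(palette):
...     color_seg[pred_seg == label, :] = color

>>> # blend the original image with the colored segmentation map
>>> img = np.array(image) * 0.5 + color_seg * 0.5
>>> img = img.astype(np.uint8)

>>> plt.figure(figsize=(15, 10))
>>> plt.imshow(img)
>>> plt.show()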
