Question answering
Question answering tasks return an answer given a question. If you’ve ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you’ve used a question answering model before. There are two common types of question answering tasks:
Extractive: extract the answer from the given context.
Abstractive: generate an answer from the context that correctly answers the question.
This guide will show you how to:
Finetune DistilBERT on the SQuAD dataset for extractive question answering.
Use your finetuned model for inference.
The task illustrated in this tutorial is supported by the following model architectures:
ALBERT, BART, BERT, BigBird, BigBird-Pegasus, BLOOM, CamemBERT, CANINE, ConvBERT, Data2VecText, DeBERTa, DeBERTa-v2, DistilBERT, ELECTRA, ERNIE, ErnieM, Falcon, FlauBERT, FNet, Funnel Transformer, OpenAI GPT-2, GPT Neo, GPT NeoX, GPT-J, I-BERT, LayoutLMv2, LayoutLMv3, LED, LiLT, Longformer, LUKE, LXMERT, MarkupLM, mBART, MEGA, Megatron-BERT, MobileBERT, MPNet, MPT, MRA, MT5, MVP, Nezha, Nyströmformer, OPT, QDQBert, Reformer, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, Splinter, SqueezeBERT, T5, UMT5, XLM, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD, YOSO
Before you begin, make sure you have all the necessary libraries installed:
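The package names below assume this guide's stack ships as the standard transformers, datasets, and evaluate packages; adjust if your install differs:

```
pip install transformers datasets evaluate
```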
We encourage you to log in to your BOINC AI account so you can upload and share your model with the community. When prompted, enter your token to log in:
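A minimal sketch, assuming the hub client exposes the usual notebook_login helper from huggingface_hub:

```py
from huggingface_hub import notebook_login

# Prompts for your access token in a notebook environment
notebook_login()
```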
Load SQuAD dataset
Start by loading a smaller subset of the SQuAD dataset from the 🌍 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
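One way this could look, assuming the dataset is published under the squad identifier (the 5000-example slice is just a convenient starting size):

```py
from datasets import load_dataset

# Load a small slice of the training data to experiment with
squad = load_dataset("squad", split="train[:5000]")
```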
Split the dataset’s train split into a train and test set with the train_test_split method:
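For example, holding out 20% of the examples for testing:

```py
# 80/20 train/test split of the 5000-example subset
squad = squad.train_test_split(test_size=0.2)
```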
Then take a look at an example:
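Indexing into the training split shows the raw fields:

```py
# Print the first training example
squad["train"][0]
```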
There are several important fields here:

answers: the starting location of the answer token and the answer text.
context: background information from which the model needs to extract the answer.
question: the question a model should answer.
Preprocess
The next step is to load a DistilBERT tokenizer to process the question and context fields:
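A sketch, assuming the commonly used distilbert-base-uncased checkpoint:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```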
There are a few preprocessing steps particular to question answering tasks you should be aware of:

1. Some examples in a dataset may have a very long context that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the context by setting truncation="only_second".
2. Next, map the start and end positions of the answer to the original context by setting return_offsets_mapping=True.
3. With the mapping in hand, you can find the start and end tokens of the answer. Use the sequence_ids method to find which part of the offset corresponds to the question and which corresponds to the context.
Here is how you can create a function to truncate and map the start and end tokens of the answer to the context:
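The following sketch combines the three steps above; max_length=384 and padding="max_length" are common choices for SQuAD-style data rather than requirements:

```py
def preprocess_function(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions,
        examples["context"],
        max_length=384,
        truncation="only_second",
        return_offsets_mapping=True,
        padding="max_length",
    )

    offset_mapping = inputs.pop("offset_mapping")
    answers = examples["answers"]
    start_positions = []
    end_positions = []

    for i, offset in enumerate(offset_mapping):
        answer = answers[i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = inputs.sequence_ids(i)

        # Find where the context begins and ends in the tokenized sequence
        idx = 0
        while sequence_ids[idx] != 1:
            idx += 1
        context_start = idx
        while sequence_ids[idx] == 1:
            idx += 1
        context_end = idx - 1

        # If the answer is not fully inside the context, label it (0, 0)
        if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Otherwise walk inward to the answer's start and end token indices
            idx = context_start
            while idx <= context_end and offset[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)

            idx = context_end
            while idx >= context_start and offset[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)

    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs
```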
To apply the preprocessing function over the entire dataset, use 🌍 Datasets map function. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once. Remove any columns you don’t need:
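For example:

```py
# Batched mapping; drop the raw columns since the model only needs the tokenized ones
tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```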
Now create a batch of examples using DefaultDataCollator. Unlike other data collators in 🌍 Transformers, the DefaultDataCollator does not apply any additional preprocessing such as padding.
Pytorch
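A sketch (DefaultDataCollator returns PyTorch tensors by default):

```py
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator()
```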
TensorFlow
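The same collator, asking for TensorFlow tensors instead:

```py
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator(return_tensors="tf")
```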
Train
Pytorch
If you aren’t familiar with finetuning a model with the Trainer, take a look at the basic tutorial here!
You’re ready to start training your model now! Load DistilBERT with AutoModelForQuestionAnswering:
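A sketch, again assuming the distilbert-base-uncased checkpoint:

```py
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```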
At this point, only three steps remain:

1. Define your training hyperparameters in TrainingArguments. The only required parameter is output_dir, which specifies where to save your model. You’ll push this model to the Hub by setting push_to_hub=True (you need to be signed in to BOINC AI to upload your model).
2. Pass the training arguments to Trainer along with the model, dataset, tokenizer, and data collator.
3. Call train() to finetune your model.
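Putting the three steps together could look like the following; the hyperparameters and the my_awesome_qa_model name are illustrative, not prescribed:

```py
training_args = TrainingArguments(
    output_dir="my_awesome_qa_model",  # hypothetical repo/directory name
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_squad["train"],
    eval_dataset=tokenized_squad["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
)

trainer.train()
```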
Once training is completed, share your model to the Hub with the push_to_hub() method so everyone can use your model:
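For example:

```py
trainer.push_to_hub()
```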
TensorFlow
If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
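A sketch using the create_optimizer convenience function from 🌍 Transformers; the batch size, epoch count, and learning rate are illustrative:

```py
from transformers import create_optimizer

batch_size = 16
num_epochs = 2
total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
optimizer, schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=0,
    num_train_steps=total_train_steps,
)
```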
Then you can load DistilBERT with TFAutoModelForQuestionAnswering:
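Assuming the same distilbert-base-uncased checkpoint:

```py
from transformers import TFAutoModelForQuestionAnswering

model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```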
Convert your datasets to the tf.data.Dataset format with prepare_tf_dataset():
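For example, with the batch size used above:

```py
tf_train_set = model.prepare_tf_dataset(
    tokenized_squad["train"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)

tf_validation_set = model.prepare_tf_dataset(
    tokenized_squad["test"],
    shuffle=False,
    batch_size=16,
    collate_fn=data_collator,
)
```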
Configure the model for training with compile:
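Note that 🌍 Transformers models compute a task-appropriate loss internally, so you don't need to pass one unless you want to override it:

```py
# No loss argument: the model's built-in QA loss is used
model.compile(optimizer=optimizer)
```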
The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the PushToHubCallback:
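A sketch; the my_awesome_qa_model name is hypothetical:

```py
from transformers.keras_callbacks import PushToHubCallback

callback = PushToHubCallback(
    output_dir="my_awesome_qa_model",  # hypothetical repo name
    tokenizer=tokenizer,
)
```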
Finally, you’re ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune the model:
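For example, training for three epochs:

```py
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```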
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding PyTorch notebook or TensorFlow notebook.
Evaluate
Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The Trainer still calculates the evaluation loss during training so you’re not completely in the dark about your model’s performance.
If you have more time and you’re interested in how to evaluate your model for question answering, take a look at the Question answering chapter from the 🌍 BOINC AI Course!
Inference
Great, now that you’ve finetuned a model, you can use it for inference!
Come up with a question and some context you’d like the model to predict:
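Any question/context pair will do; for example:

```py
question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."
```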
The simplest way to try out your finetuned model for inference is to use it in a pipeline(). Instantiate a pipeline for question answering with your model, and pass your text to it:
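A sketch, where my_awesome_qa_model stands in for whatever name you pushed your model under:

```py
from transformers import pipeline

question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
question_answerer(question=question, context=context)
```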
You can also manually replicate the results of the pipeline if you’d like:
Pytorch
Tokenize the text and return PyTorch tensors:
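Again assuming the hypothetical my_awesome_qa_model checkpoint:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
inputs = tokenizer(question, context, return_tensors="pt")
```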
Pass your inputs to the model and return the logits:
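For example:

```py
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
with torch.no_grad():
    outputs = model(**inputs)
```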
Get the highest probability from the model output for the start and end positions:
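Taking the argmax of the start and end logits gives the most likely answer span:

```py
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
```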
Decode the predicted tokens to get the answer:
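Slice the input ids to the predicted span and decode:

```py
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```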
TensorFlow
Tokenize the text and return TensorFlow tensors:
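The TensorFlow version mirrors the PyTorch one, swapping the tensor type:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
inputs = tokenizer(question, context, return_tensors="tf")
```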
Pass your inputs to the model and return the logits:
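For example:

```py
from transformers import TFAutoModelForQuestionAnswering

model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
outputs = model(**inputs)
```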
Get the highest probability from the model output for the start and end positions:
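As in the PyTorch path, take the argmax of the start and end logits:

```py
import tensorflow as tf

answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```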
Decode the predicted tokens to get the answer:
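Slice and decode the predicted span:

```py
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```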