Summarization

Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:

  • Extractive: extract the most relevant information from a document.

  • Abstractive: generate new text that captures the most relevant information.

This guide will show you how to:

  1. Finetune T5 on the California state bill subset of the BillSum dataset for abstractive summarization.

  2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

BART, BigBird-Pegasus, Blenderbot, BlenderbotSmall, Encoder decoder, FairSeq Machine-Translation, GPTSAN-japanese, LED, LongT5, M2M100, Marian, mBART, MT5, MVP, NLLB, NLLB-MOE, Pegasus, PEGASUS-X, PLBart, ProphetNet, SwitchTransformers, T5, UMT5, XLM-ProphetNet

Before you begin, make sure you have all the necessary libraries installed:


pip install transformers datasets evaluate rouge_score

We encourage you to log in to your BOINC AI account so you can upload and share your model with the community. When prompted, enter your token to log in:


>>> from boincai_hub import notebook_login

>>> notebook_login()

Load BillSum dataset

Start by loading the smaller California state bill subset of the BillSum dataset from the 🌍 Datasets library:

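A minimal sketch with load_dataset; the ca_test split contains only the California bills:

>>> from datasets import load_dataset

>>> billsum = load_dataset("billsum", split="ca_test")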

Split the dataset into a train and test set with the train_test_split method:

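Here 20% of the bills are held out for evaluation; the ratio is an arbitrary choice:

>>> billsum = billsum.train_test_split(test_size=0.2)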

Then take a look at an example:

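>>> billsum["train"][0]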

There are two fields that you’ll want to use:

  • text: the text of the bill which’ll be the input to the model.

  • summary: a condensed version of text which’ll be the model target.

Preprocess

The next step is to load a T5 tokenizer to process text and summary:

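This sketch assumes the t5-small checkpoint; any T5 checkpoint works the same way:

>>> from transformers import AutoTokenizer

>>> checkpoint = "t5-small"  # an example checkpoint; swap in a larger T5 if you prefer
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)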

The preprocessing function you want to create needs to:

  1. Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks.

  2. Use the text_target keyword argument when tokenizing labels.

  3. Truncate sequences to be no longer than the maximum length set by the max_length parameter.

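One possible implementation; the maximum lengths (1024 tokens for inputs, 128 for summaries) are illustrative choices:

>>> prefix = "summarize: "

>>> def preprocess_function(examples):
...     # prefix every bill with the summarization prompt
...     inputs = [prefix + doc for doc in examples["text"]]
...     model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
...
...     # tokenize the summaries as targets via text_target
...     labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True)
...
...     model_inputs["labels"] = labels["input_ids"]
...     return model_inputs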

To apply the preprocessing function over the entire dataset, use the 🌍 Datasets map method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:

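>>> tokenized_billsum = billsum.map(preprocess_function, batched=True)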

Now create a batch of examples using DataCollatorForSeq2Seq. It’s more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

Pytorch
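A sketch reusing the tokenizer and checkpoint from the preprocessing step:

>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)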

TensorFlow
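The TensorFlow version differs only in the return_tensors argument:

>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")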

Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🌍 Evaluate library. For this task, load the ROUGE metric (see the 🌍 Evaluate quick tour to learn more about how to load and compute a metric):

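>>> import evaluate

>>> rouge = evaluate.load("rouge")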

Then create a function that passes your predictions and labels to compute to calculate the ROUGE metric:

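One way to write it; the -100 values are the padding placeholder the data collator inserts into labels:

>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
...     # replace the -100 label padding with the pad token so the labels can be decoded
...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
...
...     result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
...
...     # also track the average generated length
...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
...     result["gen_len"] = np.mean(prediction_lens)
...
...     return {k: round(v, 4) for k, v in result.items()}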

Your compute_metrics function is ready to go now, and you’ll return to it when you set up your training.

Train

Pytorch

If you aren’t familiar with finetuning a model with the Trainer, take a look at the basic tutorial here!

You’re ready to start training your model now! Load T5 with AutoModelForSeq2SeqLM:

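>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)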

At this point, only three steps remain:

  1. Define your training hyperparameters in Seq2SeqTrainingArguments. The only required parameter is output_dir which specifies where to save your model. You’ll push this model to the Hub by setting push_to_hub=True (you need to be signed in to BOINC AI to upload your model). At the end of each epoch, the Trainer will evaluate the ROUGE metric and save the training checkpoint.

  2. Pass the training arguments to Seq2SeqTrainer along with the model, dataset, tokenizer, data collator, and compute_metrics function.

  3. Call train() to finetune your model.

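A sketch of all three steps; every hyperparameter below is an illustrative choice, my_awesome_billsum_model is a placeholder name, and fp16=True assumes a CUDA GPU:

>>> from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="my_awesome_billsum_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     weight_decay=0.01,
...     save_total_limit=3,
...     num_train_epochs=4,
...     predict_with_generate=True,
...     fp16=True,  # drop this if you are not training on a CUDA GPU
...     push_to_hub=True,
... )

>>> trainer = Seq2SeqTrainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_billsum["train"],
...     eval_dataset=tokenized_billsum["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()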

Once training is completed, share your model to the Hub with the push_to_hub() method so everyone can use your model:

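>>> trainer.push_to_hub()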

TensorFlow

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial here!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
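A reasonable starting point; the values are illustrative:

>>> from transformers import AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)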

Then you can load T5 with TFAutoModelForSeq2SeqLM:

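>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)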

Convert your datasets to the tf.data.Dataset format with prepare_tf_dataset():

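The batch size of 16 is an arbitrary choice; the data collator from the preprocessing step handles padding:

>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_billsum["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     tokenized_billsum["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )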

Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

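>>> model.compile(optimizer=optimizer)  # no loss argument needed; the model's default loss is used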

The last two things to set up before you start training are a way to compute the ROUGE score from the predictions and a way to push your model to the Hub. Both are done with Keras callbacks.

Pass your compute_metrics function to KerasMetricCallback:

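A sketch that generates summaries from the test set at the end of each epoch and scores them with compute_metrics; predict_with_generate=True is needed because ROUGE compares decoded text:

>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(
...     metric_fn=compute_metrics, eval_dataset=tf_test_set, predict_with_generate=True
... )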

Specify where to push your model and tokenizer in the PushToHubCallback:

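The repository name my_awesome_billsum_model is a placeholder; pick your own:

>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_billsum_model",
...     tokenizer=tokenizer,
... )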

Then bundle your callbacks together:

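>>> callbacks = [metric_callback, push_to_hub_callback]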

Finally, you’re ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

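Three epochs is an illustrative choice:

>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)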

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for summarization, take a look at the corresponding PyTorch notebook or TensorFlow notebook.

Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text you’d like to summarize. For T5, you need to prefix your input depending on the task you’re working on. For summarization you should prefix your input as shown below:

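The bill excerpt below is only a placeholder; substitute any document you want to summarize:

>>> text = "summarize: The people of the State of California do enact as follows: SECTION 1. The Legislature finds and declares that access to safe, affordable housing is a matter of statewide concern, and it is the intent of the Legislature to streamline the approval of accessory dwelling units."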

The simplest way to try out your finetuned model for inference is to use it in a pipeline(). Instantiate a pipeline for summarization with your model, and pass your text to it:

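Assuming your finetuned model was saved or pushed as my_awesome_billsum_model (swap in your own checkpoint name):

>>> from transformers import pipeline

>>> summarizer = pipeline("summarization", model="my_awesome_billsum_model")
>>> summarizer(text)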

You can also manually replicate the results of the pipeline if you’d like:

Pytorch

Tokenize the text and return the input_ids as PyTorch tensors:

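Again assuming the my_awesome_billsum_model checkpoint:

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids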

Use the generate() method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the Text Generation API.

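The generation settings below (greedy decoding, at most 100 new tokens) are illustrative:

>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)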

Decode the generated token ids back into text:

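>>> tokenizer.decode(outputs[0], skip_special_tokens=True)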

TensorFlow

Tokenize the text and return the input_ids as TensorFlow tensors:

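As before, my_awesome_billsum_model stands in for your own checkpoint:

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids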

Use the generate() method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the Text Generation API.

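With the same illustrative generation settings:

>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)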

Decode the generated token ids back into text:
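>>> tokenizer.decode(outputs[0], skip_special_tokens=True)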
