Sentiment Tuning Examples

The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as lvwerra/distilbert-imdb).
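
The classifier's score for the positive class is what drives the fine-tuning. For reference, here is a minimal sketch of querying that classifier through the transformers pipeline API; the sample sentences are made up for illustration, and the top_k argument assumes a reasonably recent transformers version.

# Score texts with the sentiment classifier used as the reward model.
from transformers import pipeline

sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

texts = ["this movie was really good!!", "this movie was really bad!!"]
# top_k=None returns a score for every label (NEGATIVE and POSITIVE)
for output in sentiment_pipe(texts, top_k=None):
    print(output)
# The POSITIVE score is typically used as the scalar reward during training.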

Here's an overview of the notebooks and scripts in the trl repository:

File | Description
--- | ---
examples/scripts/ppo.py | This script shows how to use the PPOTrainer to fine-tune a sentiment analysis model using the IMDB dataset.
examples/notebooks/gpt2-sentiment.ipynb | This notebook demonstrates how to reproduce the GPT2 IMDB sentiment tuning example in a Jupyter notebook.
examples/notebooks/gpt2-sentiment-control.ipynb | This notebook demonstrates how to reproduce the GPT2 sentiment control example in a Jupyter notebook.
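
At their core, all three examples implement the same loop: generate responses from the policy model, score them with the sentiment classifier, and optimize with PPO. Below is a minimal sketch of that loop, assuming the PPOTrainer API of the trl release these scripts were written against (later releases changed this interface); the prompts are made up, and the real scripts build them from IMDB reviews.

import torch
from datasets import Dataset
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=2, mini_batch_size=1)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)

# Toy prompt dataset; the real scripts build prompts from IMDB reviews.
dataset = Dataset.from_dict({"query": ["This movie was", "The acting felt"]})
dataset = dataset.map(lambda s: {"input_ids": tokenizer.encode(s["query"])})
dataset.set_format(type="torch")

ppo_trainer = PPOTrainer(
    config, model, ref_model, tokenizer, dataset=dataset,
    data_collator=lambda data: {k: [d[k] for d in data] for k in data[0]},
)
reward_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]
    # 1. Generate responses from the current policy.
    response_tensors = ppo_trainer.generate(
        query_tensors, return_prompt=False, max_new_tokens=24,
        do_sample=True, pad_token_id=tokenizer.eos_token_id,
    )
    batch["response"] = tokenizer.batch_decode(response_tensors)
    # 2. Score query+response pairs with the sentiment classifier; the
    #    POSITIVE probability serves as the scalar reward.
    texts = [q + r for q, r in zip(batch["query"], batch["response"])]
    rewards = [
        torch.tensor(next(s["score"] for s in out if s["label"] == "POSITIVE"))
        for out in reward_pipe(texts, top_k=None)
    ]
    # 3. Run one PPO optimisation step and log the stats.
    stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
    ppo_trainer.log_stats(stats, batch, rewards)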

Usage


# 1. run directly
python examples/scripts/ppo.py
# 2. run via `accelerate` (recommended), enabling more features (e.g., multiple GPUs, deepspeed)
accelerate config # will prompt you to define the training configuration
accelerate launch examples/scripts/ppo.py # launches training
# 3. get help text and documentation
python examples/scripts/ppo.py --help
# 4. configure logging with wandb and, say, mini_batch_size=1 and gradient_accumulation_steps=16
python examples/scripts/ppo.py --ppo_config.log_with wandb --ppo_config.mini_batch_size 1 --ppo_config.gradient_accumulation_steps 16

Note: if you don't want to log with wandb, remove log_with="wandb" from the scripts/notebooks. You can also replace it with your favourite experiment tracker supported by accelerate.
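
As a sketch, the relevant setting lives on the PPOConfig; the argument names below are assumed from the trl release these scripts target.

from trl import PPOConfig

# log_with selects the experiment tracker; any tracker supported by
# `accelerate` (e.g. "wandb", "tensorboard") can be used.
config = PPOConfig(model_name="gpt2", log_with="wandb")
# Set log_with=None to disable experiment tracking entirely:
# config = PPOConfig(model_name="gpt2", log_with=None)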

A few notes on multi-GPU

To run in a multi-GPU setup with DDP (Distributed Data Parallel), change the device_map value to device_map={"": Accelerator().process_index} and make sure to run your script with accelerate launch yourscript.py. If you want to apply naive pipeline parallelism, you can use device_map="auto".
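
Here is a minimal sketch of the DDP variant, using gpt2 as a stand-in checkpoint:

from accelerate import Accelerator
from trl import AutoModelForCausalLMWithValueHead

# With DDP, each process must hold a full copy of the model on its own GPU,
# so the device_map pins the entire model to this process's device index.
device_map = {"": Accelerator().process_index}
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "gpt2", device_map=device_map
)

# For naive pipeline parallelism instead, let accelerate shard the layers
# across all visible GPUs:
# model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2", device_map="auto")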

Benchmarks

Below are some benchmark results for examples/scripts/ppo.py. To reproduce locally, please check out the --command arguments below.


With and without gradient accumulation


Comparing different models (gpt2, gpt2-xl, falcon, llama2)


With and without PEFT

