Distributed training with BOINC AI Accelerate
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At BOINC AI, we created the Accelerate library to help users easily train a Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
Setup
Get started by installing Accelerate:
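For example, assuming the package is published on PyPI under the name accelerate, a plain pip install is enough:

```bash
pip install accelerate
```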
Then import and create an Accelerator object. The Accelerator will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
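A minimal sketch of this step with the accelerate package's Accelerator class:

```py
from accelerate import Accelerator

accelerator = Accelerator()
```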
Prepare to accelerate
The next step is to pass all the relevant training objects to the prepare method. This includes your training and evaluation DataLoaders, a model and an optimizer:
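A sketch of the call, assuming your objects are named train_dataloader, eval_dataloader, model, and optimizer; prepare() returns the wrapped objects in the same order they were passed in:

```py
train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
    train_dataloader, eval_dataloader, model, optimizer
)
```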
Backward
The last addition is to replace the typical loss.backward() in your training loop with Accelerate's backward method:
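Inside the loop, the change looks like the sketch below, which assumes a Transformers-style model whose output exposes a loss attribute; the forward pass and optimizer step are shown only for context:

```py
for batch in train_dataloader:
    outputs = model(**batch)
    loss = outputs.loss
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```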
As the following code shows, you only need to add four lines to your training loop to enable distributed training!
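Below is a minimal, self-contained sketch with the four additions marked. It uses a hypothetical toy model and random data instead of a Transformers model so it can run anywhere, but the Accelerate-specific lines are the same:

```py
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator   # 1. import the library

accelerator = Accelerator()          # 2. create the Accelerator

# Toy model and random data, purely for illustration.
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
train_dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# 3. let Accelerate place everything on the right device(s)
model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(3):
    for inputs, labels in train_dataloader:
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
        accelerator.backward(loss)   # 4. replaces loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```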
Train
Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.
Train with a script
If you are running your training from a script, run the following command to create and save a configuration file:
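The configuration is created interactively with the accelerate command-line tool installed alongside the package:

```bash
accelerate config
```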
Then launch your training with:
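Assuming your training script is saved as train.py (a hypothetical file name):

```bash
accelerate launch train.py
```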
Train with a notebook
Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to notebook_launcher:
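A sketch, assuming the training code has been wrapped in a function named training_function (a hypothetical name) that takes no arguments:

```py
from accelerate import notebook_launcher

notebook_launcher(training_function)
```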
For more information about Accelerate and its rich features, refer to the documentation.