Distributed Training
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude.
All the examples and training scripts work out of the box with distributed training. There are two ways of launching them:
Using the launcher script:
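For instance, a typical launch looks like the sketch below. It assumes the launcher is Optimum Habana's gaudi_spawn.py with its --world_size and --use_mpi options; treat the script name and flags as assumptions and adapt them to your setup:

```bash
# Hypothetical launch: spawn 8 distributed workers via MPI and run the
# training script, whose own arguments are appended after the script path.
python gaudi_spawn.py \
    --world_size 8 --use_mpi \
    path_to_script.py --arg1 --arg2 --argN
```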
where --argX is an argument of the script to run in a distributed way. Examples are given for question answering and text classification.
Using the distributed runner class directly in code:
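A minimal sketch of the in-code approach is given below. It assumes a DistributedRunner class importable from optimum.habana.distributed that takes the training command as a string along with world_size and use_mpi arguments; these names are assumptions, so check them against your installed version:

```python
from optimum.habana.distributed import DistributedRunner  # assumed import path

# Number of devices to train on (placeholder value).
world_size = 8

# The runner wraps the same command line you would otherwise launch manually.
distributed_runner = DistributedRunner(
    command_list=["path_to_script.py --arg1 --arg2 --argN"],  # placeholder script and arguments
    world_size=world_size,
    use_mpi=True,
)

# Spawn the distributed processes and collect the job's return code.
ret_code = distributed_runner.run()
```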
You can set the training argument --distribution_strategy fast_ddp for simpler and usually faster distributed training management.
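Concretely, with the launcher sketch above, this flag is appended to the training script's own arguments (the script name and other flags remain placeholders):

```bash
# Hypothetical launch with the lightweight distribution strategy enabled.
python gaudi_spawn.py \
    --world_size 8 --use_mpi \
    path_to_script.py --arg1 --arg2 --argN \
    --distribution_strategy fast_ddp
```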
To go further, we invite you to read our guides about:
- how to train bigger models
- how to speed up your distributed runs even more