Quickstart
🤗 Optimum Neuron was designed with one goal in mind: to make training and inference straightforward for any 🤗 Transformers user while leveraging the complete power of AWS Accelerators.
Training
There are two main classes one needs to know:
- NeuronArgumentParser: inherits the original HfArgumentParser in Transformers, with additional checks on the argument values to make sure they will work well with AWS Trainium instances.
- NeuronTrainer: the trainer class that takes care of compiling and distributing the model to run on Trainium chips, and of performing training and evaluation.
The NeuronTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to make it work with Trainium will mostly consist in simply swapping the Trainer class for the NeuronTrainer one. That's how most of the example scripts were adapted from their original counterparts. Example of the modifications:
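In practice the change amounts to one import, as in the minimal sketch below; model, training_args, train_dataset, eval_dataset, tokenizer, and data_collator are assumed to be built earlier in the script, as in any 🤗 Transformers training example:

```python
from optimum.neuron import NeuronTrainer as Trainer

# Everything below is unchanged from a regular 🤗 Transformers script:
# model, training_args, train_dataset, eval_dataset, tokenizer, and
# data_collator are assumed to be created earlier in the script.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

trainer.train()
```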
All Trainium instances come with at least 2 Neuron Cores. To leverage those, we need to launch the training with torchrun. Below is an example of how to launch a training script on a trn1.2xlarge instance using a bert-base-uncased model.
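A sketch of such a launch command; the script name and hyperparameters below are illustrative placeholders, not fixed by this guide:

```bash
# Use both Neuron Cores of a trn1.2xlarge, hence --nproc_per_node=2.
# run_glue.py stands in for any training script adapted to the NeuronTrainer.
torchrun --nproc_per_node=2 run_glue.py \
  --model_name_or_path bert-base-uncased \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 8 \
  --output_dir ./bert-output
```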
Inference
You can compile and export your 🤗 Transformers models to a serialized format before inference on Neuron devices:
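A sketch of the export command using the optimum-cli neuron exporter; the output directory name is an arbitrary choice, and the flags mirror the options described below:

```bash
optimum-cli export neuron \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  --batch_size 1 \
  --sequence_length 32 \
  --auto_cast matmul \
  --auto_cast_type bf16 \
  distilbert_base_uncased_finetuned_sst2_english_neuron/
```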
The command above will export distilbert-base-uncased-finetuned-sst-2-english with static shapes batch_size=1 and sequence_length=32, and cast all matmul operations from FP32 to BF16. Check out the exporter guide for more compilation options.
Then you can run the exported Neuron model on Neuron devices with the NeuronModelForXXX classes, which are similar to the AutoModelForXXX classes in 🤗 Transformers:
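For a sequence classification checkpoint like the one exported above, a minimal sketch looks like this; the directory name is the one assumed in the export step, and the input sentence is just an illustration:

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

# Load the compiled model and its tokenizer from the export directory
# (directory name assumed from the export step above).
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert_base_uncased_finetuned_sst2_english_neuron"
)
tokenizer = AutoTokenizer.from_pretrained(
    "distilbert_base_uncased_finetuned_sst2_english_neuron"
)

inputs = tokenizer("This movie was a delight from start to finish.", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])
```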