AWS Trainium & Inferentia

๐ŸŒHOW-TO GUIDES

• Overview
• Set up AWS Trainium instance
• Neuron model cache
• Fine-tune Transformers with AWS Trainium
• Export a model to Inferentia
• Neuron models for inference
• Inference pipelines with AWS Neuron
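The "Export a model to Inferentia" and "Neuron models for inference" guides above cover the inference workflow in detail. As a quick orientation, here is a minimal sketch, assuming `optimum-neuron` is installed on a Neuron-enabled instance (e.g. inf2) and using `distilbert-base-uncased-finetuned-sst-2-english` as an example checkpoint: the model is compiled for Neuron at load time and then used like a regular Transformers model.

```python
# Minimal sketch: compile a Hugging Face model for Inferentia and run inference.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True traces and compiles the model for Neuron at load time.
# Neuron needs static input shapes, supplied here as batch_size / sequence_length.
model = NeuronModelForSequenceClassification.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=128,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("AWS Neuron makes inference fast.", return_tensors="pt")
logits = model(**inputs).logits
print(logits)

# The compiled model can be saved and reloaded later without re-exporting.
model.save_pretrained("distilbert_neuron/")
```

The same export can also be performed from the command line with `optimum-cli export neuron`, and the "Inference pipelines with AWS Neuron" guide shows how to wrap a compiled model in a `pipeline`-style API.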