Quickstart
🤗 Optimum Habana was designed with one goal in mind: making training and evaluation straightforward for any 🤗 Transformers user while leveraging the complete power of Gaudi processors. There are two main classes one needs to know:
GaudiTrainer: the trainer class that takes care of compiling (lazy or eager mode) and distributing the model to run on HPUs, and of performing training and evaluation.
GaudiConfig: the class that enables configuring Habana Mixed Precision and deciding whether optimized operators and optimizers should be used or not.
The GaudiTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to make it work with Gaudi will mostly consist in simply swapping the Trainer class for the GaudiTrainer one. That is how most of the example scripts were adapted from their original counterparts.
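The swap can be sketched as a diff (a sketch: GaudiTrainingArguments mirrors TrainingArguments and additionally accepts the use_habana, use_lazy_mode, and gaudi_config_name arguments):

```diff
- from transformers import Trainer, TrainingArguments
+ from optimum.habana import GaudiTrainer, GaudiTrainingArguments

  # define the training arguments
- training_args = TrainingArguments(
+ training_args = GaudiTrainingArguments(
+   use_habana=True,
+   use_lazy_mode=True,
+   gaudi_config_name=gaudi_config_name,
    # other training arguments...
  )

  # initialize the trainer with your model and dataset
- trainer = Trainer(
+ trainer = GaudiTrainer(
      model=model,
      args=training_args,
      train_dataset=train_dataset,
      # other arguments...
  )
```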
🤗 Optimum Habana also features HPU-optimized support for the 🤗 Diffusers library. Thus, you can easily deploy Stable Diffusion on Gaudi to perform text-to-image generation.
Here is how to use it and the differences with the 🤗 Diffusers library:
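As a sketch of the differences (based on the GaudiStableDiffusionPipeline and GaudiDDIMScheduler classes; the use_hpu_graphs flag and the Habana/stable-diffusion Gaudi configuration are assumptions about the HPU-specific arguments):

```diff
- from diffusers import DDIMScheduler, StableDiffusionPipeline
+ from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

  model_name = "CompVis/stable-diffusion-v1-4"

- scheduler = DDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
+ scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

- pipeline = StableDiffusionPipeline.from_pretrained(
+ pipeline = GaudiStableDiffusionPipeline.from_pretrained(
      model_name,
      scheduler=scheduler,
+     use_habana=True,
+     use_hpu_graphs=True,
+     gaudi_config="Habana/stable-diffusion",
  )

  outputs = pipeline(
      ["An image of a squirrel in Picasso style"],
      num_images_per_prompt=16,
+     batch_size=4,
  )
```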
Here are examples for various modalities and tasks that can be used out of the box:
Text
Images
Audio
Text and images
where gaudi_config_name is the name of a model from the Hugging Face Hub (Gaudi configurations are stored in model repositories) or a path to a local Gaudi configuration file (you can see how to write your own).
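A local Gaudi configuration file is a small JSON file read by GaudiConfig. A minimal sketch (field names taken from GaudiConfig; values are illustrative):

```json
{
  "use_habana_mixed_precision": true,
  "use_fused_adam": true,
  "use_fused_clip_norm": true
}
```

The same fields are what the named Hub repositories (for example, those under the Habana organization) store in their model repositories.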