🤗 Optimum Habana
🤗 Optimum Habana is the interface between the 🤗 Transformers and 🤗 Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools that enable easy model loading, training and inference on single- and multi-HPU settings for various downstream tasks, as shown in the tables below.
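To give a sense of what this looks like in practice, here is a minimal training sketch using the GaudiTrainer and GaudiTrainingArguments classes provided by Optimum Habana. The model name, dataset and Gaudi configuration below are placeholders chosen for illustration, and the script assumes the optimum-habana package is installed on a Gaudi machine.

```python
# Minimal sketch: fine-tuning a Transformers model on HPU with Optimum Habana.
# Model, dataset and Gaudi configuration are placeholders for illustration.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a small slice of a text-classification dataset (placeholder dataset).
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

# GaudiTrainingArguments mirrors TrainingArguments but targets HPUs.
training_args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,        # run on HPU
    use_lazy_mode=True,     # Habana lazy execution mode
    gaudi_config_name="Habana/bert-base-uncased",  # HPU-specific config from the Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

# GaudiTrainer is a drop-in replacement for the Transformers Trainer.
trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

The only HPU-specific pieces are the GaudiTrainingArguments flags and the Gaudi configuration; the rest of the script is a standard 🤗 Transformers training loop.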
HPUs offer fast model training and inference as well as a great price-performance ratio. Check out this blog post about BERT pre-training and this article benchmarking Habana Gaudi2 versus Nvidia A100 GPUs for concrete examples. If you are not familiar with HPUs, we recommend you take a look at our conceptual guide.
The following model architectures, tasks and device distributions have been validated for 🤗 Optimum Habana:
In the tables below, ✅ means single-card, multi-card and DeepSpeed have all been validated.
Transformers
Diffusers
Other models and tasks supported by the 🤗 Transformers and 🤗 Diffusers libraries may also work. You can refer to this section for using them with 🤗 Optimum Habana. In addition, this page explains how to modify any example from the 🤗 Transformers library to make it work with 🤗 Optimum Habana.
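On the 🤗 Diffusers side, the sketch below runs Stable Diffusion inference on HPU with the GaudiStableDiffusionPipeline class. It is only a sketch: the checkpoint and the "Habana/stable-diffusion" Gaudi configuration are the ones commonly used in Optimum Habana examples and are assumed here for illustration.

```python
# Minimal sketch: Stable Diffusion inference on HPU with Optimum Habana.
# Checkpoint and Gaudi configuration are placeholders for illustration.
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint

# Use the Gaudi-friendly DDIM scheduler shipped with Optimum Habana.
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,                          # run on HPU
    use_hpu_graphs=True,                      # speed up inference with HPU graphs
    gaudi_config="Habana/stable-diffusion",   # HPU-specific config from the Hub
)

outputs = pipeline(
    prompt="An astronaut riding a horse on the moon",
    num_images_per_prompt=2,
)
outputs.images[0].save("image.png")
```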