🌍 Optimum notebooks
Here you can find a list of the notebooks associated with each accelerator in 🌍 Optimum.
Optimum Habana
Show how to use DeepSpeed to pre-train/fine-tune the 1.6B-parameter GPT2-XL for causal language modeling on Habana Gaudi.
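The sketch below illustrates the general shape of that notebook, assuming optimum-habana's GaudiTrainer/GaudiTrainingArguments, a DeepSpeed ZeRO configuration stored in a hypothetical ds_config.json, and the Habana/gpt2 Gaudi configuration; the dataset slice and hyperparameters are illustrative, not the notebook's exact settings.

```python
# Minimal sketch: DeepSpeed-enabled causal-LM training of GPT2-XL on Habana Gaudi.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_id = "gpt2-xl"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Small slice of WikiText-2 as a stand-in for the notebook's training corpus.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
train_dataset = raw.map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

training_args = GaudiTrainingArguments(
    output_dir="gpt2-xl-clm-gaudi",
    use_habana=True,              # run on Habana Gaudi HPUs
    use_lazy_mode=True,           # lazy-mode graph execution
    deepspeed="ds_config.json",   # hypothetical path to a DeepSpeed ZeRO config
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=GaudiConfig.from_pretrained("Habana/gpt2"),
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```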
Optimum Intel
OpenVINO
Explains how to export your model to OpenVINO and run inference with OpenVINO Runtime on various tasks.
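A minimal sketch of that export-and-infer pattern, assuming optimum-intel's OVModelForSequenceClassification: `export=True` converts the PyTorch checkpoint to OpenVINO IR on the fly, and the resulting model is a drop-in replacement inside a Transformers pipeline. The same pattern applies to the other task classes (e.g. OVModelForQuestionAnswering, OVModelForCausalLM).

```python
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Export the PyTorch checkpoint to OpenVINO IR and load it for inference.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO Runtime makes CPU inference fast."))

# Save the exported IR so it can be reloaded later without re-exporting.
model.save_pretrained("distilbert-sst2-openvino")
```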
Show how to apply post-training quantization on a question answering model using NNCF and accelerate inference with OpenVINO.
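The sketch below shows the post-training quantization flow with optimum-intel's OVQuantizer (which uses NNCF under the hood), assuming a SQuAD-finetuned DistilBERT checkpoint and the older `quantize(calibration_dataset=..., save_directory=...)` signature; the exact API has evolved across optimum-intel releases.

```python
from functools import partial

from optimum.intel import OVQuantizer
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "distilbert-base-cased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(examples, tokenizer):
    # Tokenize question/context pairs for calibration.
    return tokenizer(
        examples["question"], examples["context"],
        truncation=True, max_length=384, padding="max_length",
    )

quantizer = OVQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "squad",
    preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
    num_samples=100,
    dataset_split="train",
)

# NNCF post-training static quantization; writes an INT8 OpenVINO IR to `save_directory`.
quantizer.quantize(
    calibration_dataset=calibration_dataset,
    save_directory="distilbert-squad-int8-ov",
)
```

The quantized model can then be reloaded with OVModelForQuestionAnswering.from_pretrained("distilbert-squad-int8-ov") and run with OpenVINO Runtime.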
Show how to load and compare outputs from two Stable Diffusion models with different precisions.
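A rough sketch of such a comparison, assuming OVStableDiffusionPipeline and OpenVINO's INFERENCE_PRECISION_HINT passed via `ov_config`; the notebook's exact setup (fixed latents, chosen metrics, model ids) is simplified here and the seed handling is only approximate.

```python
import numpy as np
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
prompt = "sailing ship in a storm, oil painting"

# Same model compiled at two different inference precisions.
pipe_fp32 = OVStableDiffusionPipeline.from_pretrained(
    model_id, export=True, ov_config={"INFERENCE_PRECISION_HINT": "f32"}
)
pipe_fp16 = OVStableDiffusionPipeline.from_pretrained(
    model_id, export=True, ov_config={"INFERENCE_PRECISION_HINT": "f16"}
)

np.random.seed(0)
image_fp32 = pipe_fp32(prompt, num_inference_steps=25).images[0]
np.random.seed(0)
image_fp16 = pipe_fp16(prompt, num_inference_steps=25).images[0]

# Pixel-level difference as a rough proxy for the accuracy impact of lower precision.
diff = np.abs(
    np.asarray(image_fp32, dtype=np.int16) - np.asarray(image_fp16, dtype=np.int16)
)
print("mean absolute pixel difference:", diff.mean())
```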
Neural Compressor
Show how to apply quantization while training your model using Intel Neural Compressor for any GLUE task.
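A minimal sketch of quantization-aware training with optimum-intel's INCTrainer and Intel Neural Compressor's QuantizationAwareTrainingConfig, reduced here to the SST-2 GLUE task with an illustrative subset and hyperparameters.

```python
from datasets import load_dataset
from neural_compressor import QuantizationAwareTrainingConfig
from optimum.intel import INCTrainer
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments

model_id = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("glue", "sst2")
dataset = dataset.map(
    lambda e: tokenizer(e["sentence"], truncation=True, max_length=128), batched=True
)

trainer = INCTrainer(
    model=model,
    quantization_config=QuantizationAwareTrainingConfig(),  # insert fake-quant ops during training
    args=TrainingArguments("sst2-qat", num_train_epochs=1, per_device_train_batch_size=16),
    train_dataset=dataset["train"].select(range(1000)),     # small subset for illustration
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("sst2-qat")  # saves the quantized model
```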
Optimum ONNX Runtime
Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task.
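A minimal sketch of the dynamic-quantization half of that notebook, assuming an AVX512-VNNI capable CPU; static quantization additionally requires a calibration dataset and calibration configuration, as shown in the notebook.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Export the checkpoint to ONNX, then quantize the exported graph.
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
ort_model.save_pretrained("distilbert-sst2-onnx")

quantizer = ORTQuantizer.from_pretrained("distilbert-sst2-onnx")
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Writes the INT8 ONNX model to `save_dir`.
quantizer.quantize(save_dir="distilbert-sst2-onnx-int8", quantization_config=dqconfig)
```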
Show how to fine-tune a DistilBERT model on GLUE tasks using ONNX Runtime (see the ORTTrainer sketch below).
Show how to fine-tune a T5 model for summarization on the BBC news corpus using ONNX Runtime.
Show how to fine-tune a DeBERTa model on the SQuAD dataset for question answering using ONNX Runtime.
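The three fine-tuning notebooks above share one pattern: swap the Transformers Trainer for Optimum's ORTTrainer (or ORTSeq2SeqTrainer/ORTSeq2SeqTrainingArguments for the T5 summarization case) so training runs through ONNX Runtime. Below is a minimal sketch for the DistilBERT/GLUE case, assuming the SST-2 task, an illustrative subset, and the fused ONNX Runtime optimizer; the notebooks add metrics, data collators, and ONNX-based evaluation on top of this.

```python
from datasets import load_dataset
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("glue", "sst2")
dataset = dataset.map(
    lambda e: tokenizer(e["sentence"], truncation=True, max_length=128), batched=True
)

args = ORTTrainingArguments(
    output_dir="distilbert-sst2-ort",
    optim="adamw_ort_fused",          # ONNX Runtime's fused AdamW optimizer
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

trainer = ORTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].select(range(1000)),  # small subset for illustration
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```

For the DeBERTa/SQuAD notebook the same ORTTrainer pattern is used with a question-answering model and SQuAD-style preprocessing.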