OpenVINO
How to use OpenVINO for inference
🌍 Optimum provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors (see the full list of supported devices).
Installation
Install 🌍 Optimum Intel with the following command:
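```bash
# the "openvino" extra installs the OpenVINO backend for Optimum Intel
pip install --upgrade-strategy eager optimum["openvino"]
```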
The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to its latest version.
Stable Diffusion
Inference
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on the fly, you can set `export=True`.
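For example, a minimal sketch (the checkpoint name and save directory below are only illustrative):

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# save the exported model so the conversion does not have to run again
pipeline.save_pretrained("openvino-sd-v1-5")
```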
To further speed up inference, the model can be statically reshaped:
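```python
# sketch assuming the `pipeline` and `prompt` from the example above;
# the batch size and resolution values are illustrative
batch_size, num_images, height, width = 1, 1, 512, 512
pipeline.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)
pipeline.compile()

image = pipeline(prompt, height=height, width=width, num_images_per_prompt=num_images).images[0]
```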
In case you want to change any parameters such as the output height or width, you'll need to statically reshape your model once again.
Supported tasks
| Task | Loading Class |
|---|---|
| text-to-image | OVStableDiffusionPipeline |
| image-to-image | OVStableDiffusionImg2ImgPipeline |
| inpaint | OVStableDiffusionInpaintPipeline |
You can find more examples in the optimum documentation.
Stable Diffusion XL
Inference
Here is an example of how you can load an SDXL OpenVINO model from stabilityai/stable-diffusion-xl-base-1.0 and run inference with OpenVINO Runtime:
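```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"  # illustrative prompt
image = pipeline(prompt).images[0]

# save the exported model so the conversion does not have to run again
pipeline.save_pretrained("openvino-sd-xl-base-1.0")
```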
To further speed up inference, the model can be statically reshaped as shown above. You can find more examples in the optimum documentation.
Supported tasks
| Task | Loading Class |
|---|---|
| text-to-image | OVStableDiffusionXLPipeline |
| image-to-image | OVStableDiffusionXLImg2ImgPipeline |