OpenVINO
🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors (see the full list of supported devices).
Install 🤗 Optimum Intel with the following command:
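```bash
# The "openvino" extra pulls in optimum-intel together with the OpenVINO runtime
pip install --upgrade-strategy eager optimum["openvino"]
```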
The `--upgrade-strategy eager` option is needed to ensure optimum-intel is upgraded to its latest version.
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, set `export=True`.
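A minimal sketch of loading with on-the-fly export; the model id here is an assumption, and any Stable Diffusion checkpoint from the Hub works:

```python
from optimum.intel import OVStableDiffusionPipeline

# Assumed checkpoint; replace with any Stable Diffusion model from the Hub
model_id = "runwayml/stable-diffusion-v1-5"

# export=True converts the PyTorch weights to the OpenVINO format on the fly
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# Save the exported model so the conversion does not run again next time
pipeline.save_pretrained("openvino-sd-v1-5")
```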
To further speed up inference, the model can be statically reshaped:
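A sketch of static reshaping, continuing from the pipeline above and using the `reshape()` and `compile()` methods from Optimum Intel:

```python
# Fix the input shapes the model will be compiled for
batch_size, num_images, height, width = 1, 1, 512, 512
pipeline.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)

# Recompile the statically-shaped model before running inference
pipeline.compile()

# Inference must now use the exact shapes the model was reshaped to
image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```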
If you want to change any parameters such as the output height or width, you'll need to statically reshape your model once again.
The following Stable Diffusion tasks are supported, each with a dedicated pipeline class:

- text-to-image: `OVStableDiffusionPipeline`
- image-to-image: `OVStableDiffusionImg2ImgPipeline`
- inpaint: `OVStableDiffusionInpaintPipeline`
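The other tasks follow the same pattern as text-to-image. Here is a minimal image-to-image sketch; the checkpoint, image URL, and strength value are assumptions:

```python
from io import BytesIO

import requests
from optimum.intel import OVStableDiffusionImg2ImgPipeline
from PIL import Image

# Assumed checkpoint; export=True converts the PyTorch weights on the fly
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export=True)

# Download an initial image to condition the generation on
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
```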
The same pipelines are available for Stable Diffusion XL:

- text-to-image: `OVStableDiffusionXLPipeline`
- image-to-image: `OVStableDiffusionXLImg2ImgPipeline`
Here is an example of how you can load an SDXL OpenVINO model from the Hub and run inference with OpenVINO Runtime:
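A minimal sketch, assuming the stabilityai/stable-diffusion-xl-base-1.0 checkpoint; `export=True` is only needed when loading PyTorch weights rather than an already-exported OpenVINO model:

```python
from optimum.intel import OVStableDiffusionXLPipeline

# Assumed checkpoint; any SDXL model from the Hub works
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```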
To further speed up inference, the model can be statically reshaped as shown above. You can find more examples in the Optimum documentation.