ONNX
🌍 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime.
Install 🌍 Optimum with the following command for ONNX Runtime support:
To load an ONNX model and run inference with ONNX Runtime, replace StableDiffusionPipeline with ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on the fly, set export=True.
Then perform inference:
Notice that we didn’t have to specify export=True
above.
Stable Diffusion in the ONNX format is supported for the following tasks:
- text-to-image: ORTStableDiffusionPipeline
- image-to-image: ORTStableDiffusionImg2ImgPipeline
- inpaint: ORTStableDiffusionInpaintPipeline
Stable Diffusion XL in the ONNX format is supported for the following tasks:
- text-to-image: ORTStableDiffusionXLPipeline
- image-to-image: ORTStableDiffusionXLImg2ImgPipeline
Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.
If you want to export the pipeline in the ONNX format offline and later use it for inference, you can use the optimum-cli export onnx command:
You can find more examples in the Optimum documentation.
To export your model to ONNX, you can use the Optimum CLI as follows:
Here is an example of how you can load an SDXL ONNX model from the Hub and run inference with ONNX Runtime: