Distributed inference with multiple GPUs
On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel.
This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference.
🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code.
To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don't need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process.
Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes.
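A minimal sketch of such a script is shown below; the checkpoint name, prompts, and output filename are placeholders, so swap in whichever DiffusionPipeline checkpoint and prompts you need.

```python
import torch
from accelerate import PartialState
from diffusers import DiffusionPipeline

# Load any DiffusionPipeline checkpoint; this one is only an example.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# PartialState detects the distributed setup (rank, world_size) automatically.
distributed_state = PartialState()
pipeline.to(distributed_state.device)

# Each process receives its own slice of the prompt list.
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipeline(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")
```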
Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script:
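For example, if the script above is saved as run_distributed.py (the filename is just a placeholder):

```bash
accelerate launch run_distributed.py --num_processes=2
```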
To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide.
PyTorch supports DistributedDataParallel which enables data parallelism.
To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline:
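For example, again with a placeholder Stable Diffusion checkpoint:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

from diffusers import DiffusionPipeline

# Load any DiffusionPipeline checkpoint; this one is only an example.
sd = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
```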
You'll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you're running inference in parallel over 2 GPUs, then the world_size is 2.
Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt:
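A sketch of such a function, assuming two processes, the NCCL backend, and two hard-coded example prompts:

```python
def run_inference(rank, world_size):
    # Create the distributed environment; "nccl" is the usual backend for GPUs.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Move the pipeline to the GPU that matches this process's rank.
    sd.to(rank)

    # Each process generates an image for a different prompt.
    if dist.get_rank() == 0:
        prompt = "a dog"
    elif dist.get_rank() == 1:
        prompt = "a cat"

    image = sd(prompt).images[0]
    image.save(f"./{'_'.join(prompt.split())}.png")
```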
To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size:
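A sketch of the entry point, assuming 2 GPUs; the MASTER_ADDR and MASTER_PORT defaults are only needed when the script is launched with plain python, since torchrun sets them itself:

```python
import os

def main():
    world_size = 2

    # Needed by init_process_group when launching with plain `python`;
    # torchrun sets these environment variables on its own.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")

    mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True)


if __name__ == "__main__":
    main()
```

Once you've completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script (again assuming it is saved as run_distributed.py):

```bash
torchrun run_distributed.py --nproc_per_node=2
```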