Debugging Distributed Operations
When running scripts in a distributed fashion, functions such as Accelerator.gather() and Accelerator.reduce() (and others) are often necessary to grab tensors across devices and perform certain operations on them. However, if the tensors being grabbed are not the proper shapes, your code will hang forever. The only sign that this is happening is hitting a timeout exception from torch.distributed, and this can get quite costly, as the timeout is usually 10 minutes.
Accelerate now has a debug mode which adds a negligible amount of time to each operation, but lets you verify that the inputs you are bringing in can actually perform the operation you want without hitting this timeout problem!
Visualizing the problem
To have a tangible example of this issue, let’s take the following setup (on 2 GPUs).
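A minimal sketch of such a script, assuming one process per GPU (the tensor values are illustrative; PartialState and utils.broadcast() are the Accelerate APIs the rest of this page refers to):

```python
import torch
from accelerate import PartialState
from accelerate.utils import broadcast

state = PartialState()  # one process per GPU
if state.process_index == 0:
    # shape [1, 5] on the main process...
    tensor = torch.tensor([[0.0, 1.0, 2.0, 3.0, 4.0]]).to(state.device)
else:
    # ...but shape [1, 1, 6] on the other process
    tensor = torch.tensor([[[0.0, 1.0, 2.0, 3.0, 4.0, 5.0]]]).to(state.device)

broadcast_tensor = broadcast(tensor)
print(broadcast_tensor)
```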
We’ve created a single tensor on each device, with two radically different shapes. With this setup, if we want to perform an operation such as utils.broadcast(), we would hit a timeout and wait forever, because torch.distributed requires these tensors to have the exact same shape across all processes for the operation to work.
If you run this yourself, you will find that broadcast_tensor gets printed on the main process, but its results won’t quite be right, and then the script will simply hang, never printing it on any of the other processes.
The solution
By enabling Accelerate’s operational debug mode, Accelerate will properly find and catch errors such as this and provide a very clear traceback immediately.
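Abridged, and with exact wording varying across Accelerate versions, the resulting traceback for the sketch above looks along these lines (DistributedOperationException is the real exception Accelerate raises in debug mode):

```
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    broadcast_tensor = broadcast(tensor)
  ...
accelerate.utils.operations.DistributedOperationException:

Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid.

Operation: `accelerate.utils.operations.broadcast`
Input shapes:
  - Process 0: [1, 5]
  - Process 1: [1, 1, 6]
```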
This explains that the shapes across our devices were not the same, and that we should ensure that they match properly to be compatible. Typically this means that there is either an extra dimension, or certain dimensions are incompatible with the operation.
To enable this, please do one of the following:
Enable it through the questionnaire during accelerate config (recommended)
From the CLI:
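For example, passing the --debug flag to accelerate launch (the script name and trailing arguments are placeholders):

```bash
accelerate launch --debug {my_script.py} --arg1 --arg2
```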
As an environment variable (which avoids the need for accelerate launch):
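For example, using torchrun as the launcher (torchrun is just one option here; ACCELERATE_DEBUG_MODE is the variable Accelerate checks):

```bash
ACCELERATE_DEBUG_MODE="1" torchrun {my_script.py} --arg1 --arg2
```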
Manually changing the config.yaml file:
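For example, adding a debug key to an otherwise unchanged config (the compute_environment line is shown only for context):

```yaml
compute_environment: LOCAL_MACHINE
debug: true
```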