Debugging
Multi-GPU Network Issues Debug
When training or running inference with DistributedDataParallel and multiple GPUs, if you run into issues with inter-communication between processes and/or nodes, you can use the following script to diagnose network issues.
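The script in question is torch-distributed-gpu-test.py from the Transformers repository. One way to fetch it (the URL below assumes the script still lives under scripts/distributed/ on the main branch) is:

```bash
wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py
```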
For example, to test how 2 GPUs interact, do:
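A sketch of such a launch, assuming the script was downloaded into the current directory and you run a single node with two processes:

```bash
python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```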
If both processes can talk to each other and allocate GPU memory, each will print an OK status.
For more GPUs or nodes, adjust the arguments in the script. You will find a lot more details inside the diagnostics script, and even a recipe for how to run it in a SLURM environment.
An additional level of debugging is to add the NCCL_DEBUG=INFO environment variable as follows:
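For example, reusing the two-GPU launch sketched above:

```bash
NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```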
This will dump a lot of NCCL-related debug information, which you can then search online if you find that some problems are reported. Or, if you’re not sure how to interpret the output, you can share the log file in an Issue.
Underflow and Overflow Detection
This feature is currently available for PyTorch only.
For multi-GPU training it requires DDP (torch.distributed.launch).
This feature can be used with any nn.Module-based model.
If you start getting loss=NaN or the model exhibits some other abnormal behavior due to inf or nan in activations or weights, you need to discover where the first underflow or overflow happens and what led to it. Luckily you can accomplish that easily by activating a special module that will do the detection automatically.
If you’re using Trainer, you just need to add the flag shown below to the normal command line arguments, or pass debug="underflow_overflow" when creating the TrainingArguments object.
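The command-line form mirrors the debug value named above, i.e. appending the following to your launch command:

```bash
--debug underflow_overflow
```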
If you’re using your own training loop or another Trainer, you can accomplish the same with:
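A minimal sketch, assuming model is your already-instantiated nn.Module:

```python
from transformers.debug_utils import DebugUnderflowOverflow

# attaches forward hooks that watch every module's inputs, outputs and weights
debug_overflow = DebugUnderflowOverflow(model)
```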
DebugUnderflowOverflow inserts hooks into the model that, immediately after each forward call, test the input and output variables and also the corresponding module’s weights. As soon as inf or nan is detected in at least one element of the activations or weights, the program will assert and print a report like this (this was caught with google/mt5-small under fp16 mixed precision):
The example output has been trimmed in the middle for brevity.
The second column shows the value of the absolute largest element, so if you have a closer look at the last few frames, the inputs and outputs were in the range of 1e4. So when this training was done under fp16 mixed precision, the very last step overflowed (since under fp16 the largest number before inf is 64e3). To avoid overflows under fp16, the activations must remain way below 1e4, because 1e4 * 1e4 = 1e8, so any matrix multiplication with large activations is going to lead to a numerical overflow condition.
At the very start of the trace you can discover at which batch number the problem occurred (here Detected inf/nan during batch_number=0 means the problem occurred on the first batch).
Each reported frame starts by declaring the fully qualified entry for the corresponding module this frame is reporting for. If we look just at this frame:
Here, encoder.block.2.layer.1.layer_norm indicates that it was a layer norm for the first layer of the second block of the encoder, and the specific call of the forward is T5LayerNorm.
Let’s look at the last few frames of that report:
The last frame reports on the Dropout.forward function, with the first entry for the only input and the second for the only output. You can see that it was called from an attribute dropout inside the DenseReluDense class. We can see that it happened during the first layer of the second block, during the very first batch. Finally, the absolute largest input element was 6.27e+04 and the same for the output was inf.
You can see here that T5DenseGatedGeluDense.forward resulted in output activations whose absolute max value was around 62.7K, which is very close to fp16’s top limit of 64K. In the next frame we have Dropout, which renormalizes the remaining elements after zeroing some of them, which pushes the absolute max value to more than 64K, and we get an overflow (inf).
As you can see, it’s the previous frames that we need to look into when the numbers start getting very large for fp16 numbers.
Let’s match the report to the code from models/t5/modeling_t5.py:
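Since the exact source changes between Transformers versions (newer releases rename the class to T5DenseGatedActDense), here is a simplified, paraphrased sketch of the relevant modules as they appear in the trace:

```python
from torch import nn
from transformers.activations import ACT2FN
from transformers.models.t5.modeling_t5 import T5LayerNorm


class T5DenseGatedGeluDense(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)
        self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)
        self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
        self.dropout = nn.Dropout(config.dropout_rate)
        self.gelu_act = ACT2FN["gelu_new"]

    def forward(self, hidden_states):
        # gated-gelu: elementwise product of a gelu branch and a linear branch
        hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
        hidden_linear = self.wi_1(hidden_states)
        hidden_states = hidden_gelu * hidden_linear
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.wo(hidden_states)
        return hidden_states


class T5LayerFF(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.DenseReluDense = T5DenseGatedGeluDense(config)
        self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
        self.dropout = nn.Dropout(config.dropout_rate)

    def forward(self, hidden_states):
        forwarded_states = self.layer_norm(hidden_states)
        forwarded_states = self.DenseReluDense(forwarded_states)
        hidden_states = hidden_states + self.dropout(forwarded_states)
        return hidden_states
```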
Now it’s easy to see the dropout call, and all the previous calls as well. Since the detection is happening in a forward hook, these reports are printed immediately after each forward returns.
Going back to the full report, to act on it and to fix the problem, we need to go a few frames up to where the numbers started to grow, and most likely switch to fp32 mode here, so that the numbers don’t overflow when multiplied or summed up. Of course, there might be other solutions. For example, we could turn off amp temporarily if it’s enabled, after moving the original forward into a helper wrapper, like so:
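A minimal sketch of that wrapper, applied to the gated-gelu forward from the sketch above; torch.is_autocast_enabled and torch.cuda.amp.autocast are standard PyTorch amp APIs, and the two functions are meant to replace the module's methods (adapt the body to whichever module actually overflows):

```python
import torch


def _forward(self, hidden_states):
    # the original forward body, unchanged
    hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
    hidden_linear = self.wi_1(hidden_states)
    hidden_states = hidden_gelu * hidden_linear
    hidden_states = self.dropout(hidden_states)
    hidden_states = self.wo(hidden_states)
    return hidden_states


def forward(self, hidden_states):
    # run this module with autocast disabled so it stays in fp32 under amp;
    # note: incoming activations may still arrive in fp16, cast them if needed
    if torch.is_autocast_enabled():
        with torch.cuda.amp.autocast(enabled=False):
            return self._forward(hidden_states)
    else:
        return self._forward(hidden_states)
```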
Since the automatic detector only reports on inputs and outputs of full frames, once you know where to look, you may want to analyse the intermediary stages of any specific forward function as well. In such a case you can use the detect_overflow helper function to inject the detector where you want it, for example:
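A sketch, reusing the T5LayerFF.forward from the earlier sketch; detect_overflow lives in transformers.debug_utils, and the message strings are arbitrary labels printed when an inf/nan is found:

```python
from torch import nn
from transformers.debug_utils import detect_overflow


class T5LayerFF(nn.Module):
    # __init__ as in the sketch above

    def forward(self, hidden_states):
        forwarded_states = self.layer_norm(hidden_states)
        detect_overflow(forwarded_states, "after layer_norm")
        forwarded_states = self.DenseReluDense(forwarded_states)
        detect_overflow(forwarded_states, "after DenseReluDense")
        return hidden_states + self.dropout(forwarded_states)
```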
You can see that we added 2 of these, and now we track whether inf or nan for forwarded_states was detected somewhere in between. Actually, the detector already reports these because each of the calls in the example above is an nn.Module, but let’s say you had some local direct calculations; this is how you’d do that.
Additionally, if you’re instantiating the debugger in your own code, you can adjust the number of frames printed from its default, e.g.:
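For example, via the max_frames_to_save argument (a sketch, continuing with the same model as above; 100 is an arbitrary value):

```python
from transformers.debug_utils import DebugUnderflowOverflow

# keep up to 100 forward frames around instead of the default
debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100)
```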
Specific batch absolute min and max value tracing
The same debugging class can be used for per-batch tracing with the underflow/overflow detection feature turned off.
Let’s say you want to watch the absolute min and max values for all the ingredients of each forward call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as:
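A sketch using the trace_batch_nums argument:

```python
from transformers.debug_utils import DebugUnderflowOverflow

# trace every forward call of batches 1 and 3 (0-indexed), detection assertions off
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])
```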
And now full batches 1 and 3 will be traced using the same format as the underflow/overflow detector does.
Batches are 0-indexed.
This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. Here is a sample truncated output for such a configuration:
Here you will get a huge number of frames dumped (as many as there were forward calls in your model), so it may or may not be what you want, but sometimes it can be easier to use for debugging purposes than a normal debugger. For example, if a problem starts happening at batch number 150, you can dump traces for batches 149 and 150 and compare where the numbers started to diverge.
You can also specify the batch number after which to stop the training, with:
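A sketch using the abort_after_batch_num argument:

```python
from transformers.debug_utils import DebugUnderflowOverflow

# trace batches 1 and 3, then stop training right after batch 3 completes
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
```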