Accelerate

๐ŸŒ Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.


```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()

+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+     model, optimizer, training_dataloader, scheduler
+ )

  for batch in training_dataloader:
      optimizer.zero_grad()
      inputs, targets = batch
-     inputs = inputs.to(device)
-     targets = targets.to(device)
      outputs = model(inputs)
      loss = loss_function(outputs, targets)
-     loss.backward()
+     accelerator.backward(loss)
      optimizer.step()
      scheduler.step()
```

Built on torch_xla and torch.distributed, Accelerate takes care of the heavy lifting, so you don't have to write any custom code to adapt to these platforms. Convert existing codebases to use DeepSpeed, perform fully sharded data parallelism, and get automatic support for mixed-precision training!

To get a better idea of this process, make sure to check out the Tutorials!

This code can then be launched on any system through Accelerate's CLI interface:

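A minimal sketch of that launch command, where `my_script.py` stands in for your own training script:

```shell
accelerate launch my_script.py
```

Running `accelerate config` beforehand walks through a short interactive questionnaire and stores the distributed setup (number of processes, machines, precision) that the launcher will use.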
