# BOINC AI Accelerate

🌍 Accelerate is a library that enables the same PyTorch code to be run across any distributed configuration by adding just four lines of code! In short, training and inference at scale made simple, efficient and adaptable.

```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()

+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+     model, optimizer, training_dataloader, scheduler
+ )

  for batch in training_dataloader:
      optimizer.zero_grad()
      inputs, targets = batch
      inputs = inputs.to(device)
      targets = targets.to(device)
      outputs = model(inputs)
      loss = loss_function(outputs, targets)
-     loss.backward()
+     accelerator.backward(loss)
      optimizer.step()
      scheduler.step()
```

Built on `torch_xla` and `torch.distributed`, 🌍 Accelerate takes care of the heavy lifting, so you don’t have to write any custom code to adapt to these platforms. Convert existing codebases to utilize [DeepSpeed](https://huggingface.co/docs/accelerate/usage_guides/deepspeed), perform [fully sharded data parallelism](https://huggingface.co/docs/accelerate/usage_guides/fsdp), and have automatic support for mixed-precision training!

To get a better idea of this process, make sure to check out the [Tutorials](https://huggingface.co/docs/accelerate/basic_tutorials/overview)!

This code can then be launched on any system through Accelerate’s CLI:

```bash
accelerate launch {my_script.py}
```
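The launcher can also be configured: `accelerate config` interactively stores default launch settings, and `accelerate launch` accepts flags that override them. A sketch (the script name is a placeholder):

```shell
# One-time: answer a few questions to store default launch settings.
accelerate config

# Launch with overrides, e.g. two processes and fp16 mixed precision.
accelerate launch --num_processes=2 --mixed_precision=fp16 my_script.py
```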


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://boinc-ai.gitbook.io/accelerate/getting-started/boinc-ai-accelerate.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

- the answer is not explicitly present in the current page,
- you need clarification or additional context, or
- you want to retrieve related documentation sections.
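The query can also be built programmatically. A hypothetical helper (the `ask_url` name and the example question are illustrative; only the base URL comes from this page):

```python
from urllib.parse import quote

BASE = "https://boinc-ai.gitbook.io/accelerate/getting-started/boinc-ai-accelerate.md"

def ask_url(question: str) -> str:
    """Return the GET URL that queries this documentation page."""
    # quote() percent-encodes spaces and punctuation so the URL stays valid.
    return f"{BASE}?ask={quote(question)}"

url = ask_url("How do I enable mixed-precision training?")
```

The resulting `url` can then be fetched with any HTTP client, for example `urllib.request.urlopen(url)`.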
