How to use Apple Silicon M1 GPUs
Accelerated PyTorch Training on Mac
With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new "mps" device. This maps computational graphs and primitives onto the MPS Graph framework and the tuned kernels provided by MPS. For more information, please refer to the official documents Introducing Accelerated PyTorch Training on Mac and MPS Backend.
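For illustration, here is a minimal sketch of what running computations on the "mps" device looks like (the tensor shapes and the small linear layer are arbitrary placeholders):

```python
import torch

# Use the "mps" device when available, otherwise fall back to the CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Tensors and modules placed on the "mps" device are executed by the MPS backend.
x = torch.randn(64, 128, device=device)
linear = torch.nn.Linear(128, 10).to(device)
y = linear(x)
print(y.device)  # prints "mps:0" on an MPS-enabled Apple Silicon machine
```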
Benefits of Training and Inference using Apple Silicon Chips
Enables users to train larger networks or batch sizes locally
Reduces data retrieval latency and gives the GPU direct access to the full memory store thanks to the unified memory architecture, thereby improving end-to-end performance.
Reduces costs associated with cloud-based development or the need for additional local GPUs.
Pre-requisites: To install torch with mps support, please follow this nice Medium article GPU-Acceleration Comes to PyTorch on M1 Macs.
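After installation, a quick check (a minimal sketch) confirms that your PyTorch build includes MPS support and that it is usable on the current machine:

```python
import torch

# is_built() reports whether this PyTorch build was compiled with MPS support;
# is_available() additionally checks that the current macOS machine can use it.
print(f"MPS built:     {torch.backends.mps.is_built()}")
print(f"MPS available: {torch.backends.mps.is_available()}")
```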
How it works out of the box
It is enabled by default on macOS machines with MPS-enabled Apple Silicon GPUs. To disable it, pass the --cpu flag to the accelerate launch command or answer the corresponding question when filling out the accelerate config questionnaire.
You can directly run the following script to test it out on MPS-enabled Apple Silicon machines:
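If you do not have the example script at hand, the snippet below is a minimal sketch of such a test; the toy model, data, and hyperparameters are placeholders, but the Accelerator usage follows the standard API:

```python
import torch
from accelerate import Accelerator

# Accelerator picks the best available device automatically ("mps" here).
accelerator = Accelerator()
print(f"Using device: {accelerator.device}")

# A toy model and optimizer; real workloads would substitute their own.
model = torch.nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

# One dummy training step to confirm everything runs on the MPS device.
inputs = torch.randn(32, 128, device=accelerator.device)
targets = torch.randint(0, 2, (32,), device=accelerator.device)
loss = torch.nn.functional.cross_entropy(model(inputs), targets)
accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
print(f"Loss: {loss.item():.4f}")
```

Saving this as, say, mps_test.py (a name chosen here for illustration) and running accelerate launch mps_test.py should report an mps device; passing --cpu instead forces execution on the CPU.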
A few caveats to be aware of
We strongly recommend installing PyTorch >= 1.13 (nightly version at the time of writing) on your macOS machine. It has major fixes related to model correctness and performance improvements for transformer-based models. Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
Distributed setups gloo and nccl do not work with the mps device. This means that currently only a single GPU of the mps device type can be used.
Finally, please remember that 🤗 Accelerate only integrates the MPS backend, so if you have any problems or questions regarding MPS backend usage, please file an issue on the PyTorch GitHub.