How to run PyTorch on a MacBook Pro (M1) GPU?
I tried to train a model using PyTorch on my MacBook Pro, which has the new-generation Apple M1 chip. However, PyTorch couldn't recognize my GPU:

GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs

Does anyone know any solution?

I have updated all the libraries to the latest versions.

Phocomelia answered 17/8, 2021 at 15:50 Comment(2)
There is currently no way to do that. - Wellbred
I've rolled back your edit that added a solution, as questions are not meant to be updated with answers. You should go ahead and post an answer instead. You can copy the text that you used directly from the source. - Spires
PyTorch added support for the M1 GPU as of 2022-05-18 in the nightly builds. Read more about it in their blog post.

Simply install the nightly build: conda install pytorch -c pytorch-nightly --force-reinstall

Update: It's available in the stable version:

  • Conda: conda install pytorch torchvision torchaudio -c pytorch
  • pip: pip3 install torch torchvision torchaudio

To use (source):

import torch

mps_device = torch.device("mps")

# Create a Tensor directly on the mps device
x = torch.ones(5, device=mps_device)
# Or
x = torch.ones(5, device="mps")

# Any operation happens on the GPU
y = x * 2

# Move your model to mps just like any other device
model = YourFavoriteNet()
model.to(mps_device)

# Now every call runs on the GPU
pred = model(x)
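Not in the original answer, but a common companion pattern is falling back to the CPU when MPS isn't available. The helper below is a hypothetical sketch with the availability checks factored out as plain booleans; in real code the flags would come from torch.backends.mps.is_available() and torch.cuda.is_available():

```python
def pick_device(mps_ok: bool, cuda_ok: bool = False) -> str:
    """Pick the best available device string: MPS, then CUDA, then CPU."""
    if mps_ok:
        return "mps"
    if cuda_ok:
        return "cuda"
    return "cpu"

# With PyTorch >= 1.12 installed, usage would look roughly like:
#   import torch
#   device = torch.device(pick_device(torch.backends.mps.is_available()))
#   model.to(device)
```

This keeps scripts portable: the same code runs on an Intel Mac, an M1, or a CUDA box without edits.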
Calceiform answered 18/5, 2022 at 17:34 Comment(5)
If you don't want to use Conda for this, you can check out my boilerplate repo where I demonstrate installing it with Poetry. - Interstratify
As of now, the official docs let you use conda install pytorch torchvision torchaudio -c pytorch in order to run on M1 Macs as well. - Nachison
what about tensorflow.js? - Patella
What is the difference between Conda and Pip? - Thwack
When pip installed at least, torch.ones(5, device="mps") raises UserWarning: MPS: nonzero op is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Indexing.mm:334.) - Horselaugh
It looks like PyTorch support for the M1 GPU is in the works, but is not yet complete.

From @soumith on GitHub:

So, here's an update. We plan to get the M1 GPU supported. @albanD, @ezyang and a few core-devs have been looking into it. I can't confirm/deny the involvement of any other folks right now.

So, what we have so far is that we had a prototype that was just about okay. We took the wrong approach (more graph-matching-ish), and the user-experience wasn't great -- some operations were really fast, some were really slow, there wasn't a smooth experience overall. One had to guess-work which of their workflows would be fast.

So, we're completely re-writing it using a new approach, which I think is a lot closer to your good ole PyTorch, but it is going to take some time. I don't think we're going to hit a public alpha in the next ~4 months.

We will open up development of this backend as soon as we can.

That post: https://github.com/pytorch/pytorch/issues/47702#issuecomment-965625139

TL;DR: a public alpha is at least 4 months out.

Gaudreau answered 15/11, 2021 at 4:59 Comment(3)
any update on this? 👀 - Quiles
Do we use torch.device("cpu") for now? - Flabbergast
but can we install PyTorch for CPU on Apple Silicon? I didn't find that either. - Coexist
For those who, like me, couldn't install it using conda, use pip as follows:

Requirement:

  • Any MacBook with an Apple silicon chip
  • macOS 12.3 or later

Installation:

pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu

Update:

No need for the nightly version anymore. PyTorch 1.12 now supports GPU acceleration on Apple silicon. Simply install it with the following command:

pip3 install torch torchvision torchaudio

You may follow other instructions for using PyTorch on Apple silicon and running your own benchmarks.
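Since MPS support landed in the 1.12 stable release, one way to guard a script is to check torch.__version__ before requesting the mps device. The helper below is a hypothetical sketch, not part of the answer; it assumes the usual "major.minor.patch" version form, optionally with a "+cpu"-style local suffix:

```python
def supports_mps_release(version: str) -> bool:
    """Return True if a PyTorch version string is 1.12 or newer,
    the first stable release with MPS (Apple GPU) support."""
    core = version.split("+")[0]                      # drop suffixes like "+cpu"
    major, minor = (int(p) for p in core.split(".")[:2])
    return (major, minor) >= (1, 12)

# Usage (assuming torch is installed):
#   import torch
#   if not supports_mps_release(torch.__version__):
#       raise RuntimeError("Upgrade PyTorch to >= 1.12 for MPS support")
```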

Usage:

Make sure you use mps as your device, as follows:

device = torch.device('mps')

# Send your tensor to the GPU
my_tensor = my_tensor.to(device)

Benchmarking (on M1 Max, 10-core CPU, 24-core GPU):

  1. Without using the GPU:
import torch
device = torch.device('cpu')
x = torch.rand((10000, 10000), dtype=torch.float32)
y = torch.rand((10000, 10000), dtype=torch.float32)
x = x.to(device)
y = y.to(device)
%%timeit
x * y
17.9 ms ± 390 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
  2. Using the GPU (about 5x faster):
import torch
device = torch.device('mps')
x = torch.rand((10000, 10000), dtype=torch.float32)
y = torch.rand((10000, 10000), dtype=torch.float32)
x = x.to(device)
y = y.to(device)
%%timeit
x * y
3.43 ms ± 57.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
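Note that the %%timeit cells above only work inside IPython/Jupyter. As a plain-Python alternative, here is a small, hypothetical timing helper; it is framework-agnostic, and for an accurate GPU measurement the timed callable should also force completion of the asynchronous MPS kernels (newer PyTorch versions expose torch.mps.synchronize() for this):

```python
import time

def bench(fn, warmup=3, repeats=50):
    """Return the mean seconds per call of fn, after a few warmup runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# With torch installed, comparing devices would look roughly like:
#   x = torch.rand(10000, 10000, device="mps")
#   y = torch.rand(10000, 10000, device="mps")
#   bench(lambda: x * y)   # add a synchronize call for exact GPU timing
```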
Slick answered 27/5, 2022 at 6:31 Comment(1)
Using the same Mac and code, I found 'mps' to be slower than 'cpu'. This could be because the calculation is not large enough. - Funny

© 2022 - 2024 — McMap. All rights reserved.