Hello!
I ordered an Nvidia RTX 3080 when it was released, but it doesn't look like I'll get it any time soon, so I cancelled my order after seeing the new AMD GPU release announcement.
I know that having an Nvidia GPU is "better" for deep learning because most/all libraries support CUDA/cuDNN well. After seeing the AMD release, I looked for information and found ROCm, which allows PyTorch, TensorFlow, and Caffe to run on AMD GPUs.
Does anybody here have feedback on the ROCm tooling on Arch Linux? I could find some AUR packages, but that can be a little scary when I'm used to just letting Arch handle the packages and the compatibility between TF, PyTorch, and the GPU.
I spend most of my time playing with computer vision and NLP.
I would mainly like to know how easy it is to train models on an AMD GPU with Jupyter notebooks, PyTorch (via fastai), TF/Keras, and so on.
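For what it's worth, a ROCm build of PyTorch exposes the AMD GPU through the regular torch.cuda API, so existing training code usually doesn't need changes. A minimal sketch, assuming a ROCm (or CUDA) build of PyTorch is installed; the fallback branch is only there so the snippet runs anywhere:

```python
# Minimal device-selection sketch. On a ROCm build of PyTorch,
# torch.cuda.is_available() reports the AMD GPU, so the usual
# "cuda" device string works unchanged.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # torch not installed; fall back so the sketch still runs
    device = "cpu"

print(device)
```

Model and tensor code then just calls `.to(device)` as it would on Nvidia hardware.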
Thanks.
It is too early to talk about support for Big Navi. See the discussion on Phoronix:
https://www.phoronix.com/scan.php?page= … 9-Released
I have had a Navi 10 for a year and, so far, there is no ROCm support for it.
Either wait a few more months, or you're better off sticking with Nvidia (with proprietary drivers...).
Okay, thanks for the information. I guess I'll stick with Nvidia and wait for the RTX 3080. I'll talk to the guy about keeping my order.
AMD makes great GPUs; it's just too bad that deep learning framework support isn't there yet.
Have a nice day.