I currently own an AMD Ryzen 5 5600X 6-Core Processor and an AMD Radeon RX 6800 XT, running Arch Linux without any issues. While learning ML, I realized that CUDA is the de facto standard. It is possible to make ROCm work, but its compatibility with compilers, backends, and libraries, and (especially) its documentation, is not at the same level as CUDA's.
I have the opportunity to get a relatively cheap GeForce RTX 4060 Ti 16GB, so I was considering configuring two dedicated GPUs: keeping my RX 6800 XT for rendering and using the NVIDIA GPU solely for ML workloads.
I wanted to know if anyone has experience with a similar setup, or if you could tell me whether this is possible. Most of the documentation I found refers either to NVIDIA SLI or AMD CrossFire, which do not apply in this context, or to hybrid graphics, which, if I am not mistaken, mainly covers rendering setups with an integrated/discrete GPU combination.
Any insights or advice would be greatly appreciated!
If you install nvidia-utils (which provides OpenGL and Vulkan) for the 4060, some applications may select that card automatically (Steam?).
However, nvidia-utils is only an optional dependency of cuda, so it should be possible to leave it out.
Make sure the outputs of the NVIDIA card (including its USB ports) are NOT connected to any external devices, to avoid confusion.
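Once the cuda package is in place, a quick sanity check is to ask the driver which GPUs it can see before firing up any ML framework. A minimal sketch, assuming `nvidia-smi` is available (on Arch it ships with nvidia-utils, so if you skip that package this check will simply report no GPU and you'd verify through CUDA itself instead):

```python
import shutil
import subprocess

def cuda_gpu_visible():
    """Return True if an NVIDIA GPU is visible to the driver via nvidia-smi."""
    # If nvidia-smi is not installed (e.g. nvidia-utils was left out),
    # we cannot check this way; report False rather than crash.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # `nvidia-smi -L` lists detected GPUs, one per line, e.g.
        # "GPU 0: NVIDIA GeForce RTX 4060 Ti ..."
        result = subprocess.run(
            ["nvidia-smi", "-L"],
            capture_output=True, text=True, timeout=10,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    print("NVIDIA GPU visible:", cuda_gpu_visible())
```

If this reports the 4060 Ti while your desktop keeps rendering on the RX 6800 XT (via the amdgpu/Mesa stack), the split you describe is working as intended.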