With the recent ROCm/Ollama updates, GPU offloading stopped working.
These older versions work; the new ones don't:
warning: ollama: ignoring package upgrade (0.12.3-1 => 0.12.6-1)
warning: ollama-rocm: ignoring package upgrade (0.12.3-1 => 0.12.6-1)
warning: rocm-cmake: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-core: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-device-libs: ignoring package upgrade (2:6.4.0-1 => 2:6.4.4-2)
warning: rocm-hip-libraries: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-hip-runtime: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-hip-sdk: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-language-runtime: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-llvm: ignoring package upgrade (2:6.4.0-1 => 2:6.4.4-2)
warning: rocm-opencl-runtime: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-opencl-sdk: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-smi-lib: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocminfo: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
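For anyone who wants to reproduce the hold-back, a minimal pacman.conf sketch (package names copied from the warnings above; repeated IgnorePkg lines accumulate). These entries are what produce the "ignoring package upgrade" warnings on every -Syu:

```ini
# /etc/pacman.conf (fragment) -- hold back the ollama/ROCm stack
[options]
IgnorePkg = ollama ollama-rocm
IgnorePkg = rocm-cmake rocm-core rocm-device-libs rocm-hip-libraries
IgnorePkg = rocm-hip-runtime rocm-hip-sdk rocm-language-runtime rocm-llvm
IgnorePkg = rocm-opencl-runtime rocm-opencl-sdk rocm-smi-lib rocminfo
```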
Hardware:
AMD 7900XT
I noticed in the logs that the newest version doesn't see the total VRAM:
time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
vs Ollama 0.12.3
time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"
Even though the older version of ollama sees the 20 GiB, it still doesn't offload GPU layers unless I hold back all of the packages listed above. I'm guessing this means the problem is in ROCm and not in ollama?
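The telltale field is "total vram" in those log lines. A small sketch for pulling it out when triaging a log (the sample line is copied from the broken 0.12.6 run above); as comments, two driver-side checks that can help decide whether ROCm itself still sees the card:

```shell
# Sample line from the non-working ollama 0.12.6 log above
line='time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"'

# Extract the value of the "total vram"="..." field
vram=$(printf '%s\n' "$line" | grep -o '"total vram"="[^"]*"' | cut -d'"' -f4)
echo "$vram"    # prints: 0 B   (a healthy run prints 20.0 GiB instead)

# To check the driver side instead (needs ROCm hardware, so not run here):
#   rocminfo | grep -i gfx            # a 7900 XT should enumerate as gfx1100
#   rocm-smi --showmeminfo vram       # VRAM pool as ROCm sees it
```

If rocminfo and rocm-smi still report the card but ollama logs "total vram"="0 B", that points at the ollama/ROCm boundary rather than the kernel driver.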
Here are logs of the new version that doesn't work:
https://pastebin.com/uLmsJEUx
Here is a working log:
https://pastebin.com/rSKXND5X
Since ROCm 7 is around the corner, I'm just going to hold back my ROCm and ollama updates for now.
Last edited by Orbital_sFear (2025-10-23 03:02:00)
I just tried the new packages on my Framework's AMD 780M: full segfault. I rolled back to the versions listed above (machine rolled back to 2025-10-01) and everything worked again.
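One way to do that rollback, sketched for anyone without a snapshot to restore: reinstall the old builds from the local pacman cache, or fetch them from the Arch Linux Archive. The exact filenames below are assumptions; check the cache or the archive's directory listing for the real ones:

```shell
# Roll back from the local package cache, if the old builds are still there
sudo pacman -U /var/cache/pacman/pkg/ollama-0.12.3-1-x86_64.pkg.tar.zst \
               /var/cache/pacman/pkg/ollama-rocm-0.12.3-1-x86_64.pkg.tar.zst

# Otherwise the Arch Linux Archive keeps old builds (filename is an
# assumption -- browse https://archive.archlinux.org/packages/o/ollama/):
sudo pacman -U https://archive.archlinux.org/packages/o/ollama/ollama-0.12.3-1-x86_64.pkg.tar.zst
```

Pair this with the IgnorePkg hold-back so the next -Syu doesn't immediately re-upgrade them.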
time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"
time=2025-10-22T19:58:58.294-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"
Both logs show the same amount.
What kernel are you running (uname -a if unsure) ?
If a 6.17.x kernel, does switching to linux-lts (currently 6.12.x) make a difference?
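For anyone who does want to try that suggestion, the usual steps are roughly as follows (GRUB is assumed; adjust the boot-entry step for systemd-boot or another loader):

```shell
# Install the LTS kernel alongside the current one
sudo pacman -S linux-lts linux-lts-headers

# Regenerate boot entries so linux-lts is selectable (GRUB example)
sudo grub-mkconfig -o /boot/grub/grub.cfg

# After rebooting into the LTS entry, confirm the running kernel
uname -r
```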
Linux Lucky 6.17.5-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 23 Oct 2025 18:49:03 +0000 x86_64 GNU/Linux
I'm sure switching to the LTS kernel is safe, but it's beyond what I'm willing to try on my work machines. I don't think I need a resolution; as stated, ROCm 7 is close, so I'll just wait for that. I was hoping to post a log of the issue and a temporary workaround for anyone else who runs into similar trouble.
It's a kernel issue. I can confirm that on my up-to-date system with an AMD card ollama works; I can run LLM inference even with graphical applications open. Switch to LTS if you need it.
Last edited by Succulent of your garden (2025-10-27 12:33:59)