
#1 2025-10-23 03:00:53

Orbital_sFear
Member
Registered: 2014-10-13
Posts: 50

ROCm / Ollama AMD stopped working with recent package update

With the recent ROCm/Ollama updates, GPU offloading stopped working.

The older versions below work; the new ones don't (the pacman warnings are from holding them back):
warning: ollama: ignoring package upgrade (0.12.3-1 => 0.12.6-1)
warning: ollama-rocm: ignoring package upgrade (0.12.3-1 => 0.12.6-1)
warning: rocm-cmake: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-core: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-device-libs: ignoring package upgrade (2:6.4.0-1 => 2:6.4.4-2)
warning: rocm-hip-libraries: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-hip-runtime: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-hip-sdk: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-language-runtime: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-llvm: ignoring package upgrade (2:6.4.0-1 => 2:6.4.4-2)
warning: rocm-opencl-runtime: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-opencl-sdk: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocm-smi-lib: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
warning: rocminfo: ignoring package upgrade (6.4.3-1 => 6.4.4-1)
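
For anyone who wants to reproduce this hold-back: a minimal sketch of the /etc/pacman.conf entry that produces the warnings above (the package list simply mirrors them):

# /etc/pacman.conf — pin the ROCm/Ollama stack until the regression is fixed
[options]
IgnorePkg = ollama ollama-rocm rocm-cmake rocm-core rocm-device-libs rocm-hip-libraries rocm-hip-runtime rocm-hip-sdk rocm-language-runtime rocm-llvm rocm-opencl-runtime rocm-opencl-sdk rocm-smi-lib rocminfo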

Hardware:

AMD 7900XT

I noticed in the logs that the newest version doesn't see the total VRAM:
time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
vs Ollama 0.12.3
time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"

Even though the older version of Ollama sees the 20 GiB, it doesn't offload GPU layers unless I hold back all of the packages listed above. I'm guessing this means the problem is in ROCm and not Ollama?

Here is a log of the new version that doesn't work:
https://pastebin.com/uLmsJEUx

Here is a working log:
https://pastebin.com/rSKXND5X
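
(For anyone capturing logs like these themselves: a sketch, assuming Ollama runs as the Arch package's systemd service, that pulls out the GPU discovery lines:)

# Follow the server log for the current boot and filter GPU discovery messages
journalctl -u ollama -b --no-pager | grep -Ei 'vram|rocm|gpu'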

Since ROCm 7 is around the corner, I'm just going to hold back my ROCm and Ollama updates for now.



#2 2025-10-23 05:28:38

Orbital_sFear
Member
Registered: 2014-10-13
Posts: 50

Re: ROCm / Ollama AMD stopped working with recent package update

I just tried out the new packages on my Framework AMD 780M: full segfault. I rolled back to the versions above (via the 2025-10-01 Rollback Machine snapshot), and everything worked again.
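
(For reference, the rollback can be done straight from the archive; a sketch assuming its usual URL layout, with the package name and version adjusted as needed:)

# Downgrade one package from the Arch Linux Archive
sudo pacman -U https://archive.archlinux.org/packages/o/ollama-rocm/ollama-rocm-0.12.3-1-x86_64.pkg.tar.zst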


#3 2025-10-23 10:03:10

Lone_Wolf
Administrator
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 14,439

Re: ROCm / Ollama AMD stopped working with recent package update

time=2025-10-22T19:31:51.543-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"

time=2025-10-22T19:58:58.294-07:00 level=INFO source=routes.go:1569 msg="entering low vram mode" "total vram"="20.0 GiB" threshold="20.0 GiB"

Both logs show the same amount.

What kernel are you running (uname -a if unsure)?
If it's a 6.17.x kernel, does switching to linux-lts (currently 6.12.x) make a difference?
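
(For anyone unsure how to try that, a minimal sketch of the switch, assuming a GRUB setup; other boot loaders need their own entry:)

# Install the LTS kernel alongside the current one and regenerate boot entries
sudo pacman -S linux-lts linux-lts-headers
sudo grub-mkconfig -o /boot/grub/grub.cfg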




#4 2025-10-27 06:51:37

Orbital_sFear
Member
Registered: 2014-10-13
Posts: 50

Re: ROCm / Ollama AMD stopped working with recent package update

Linux Lucky 6.17.5-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 23 Oct 2025 18:49:03 +0000 x86_64 GNU/Linux

I'm sure switching to the LTS kernel is safe, but it's beyond what I'm willing to try on my work machines. I don't think I need a resolution. As stated, ROCm 7 is close, so I'll just wait for that. I was hoping to post a log of the issue and a temporary workaround for anyone else who comes across similar troubles.


#5 2025-10-27 12:33:36

Succulent of your garden
Member
From: Majestic kingdom of pot plants
Registered: 2024-02-29
Posts: 1,057

Re: ROCm / Ollama AMD stopped working with recent package update

It's a kernel issue. I can confirm that on my up-to-date system with an AMD card, Ollama works; I can run LLM inference even with graphical applications open. Switch to the LTS kernel if you need it.





#6 2025-11-26 00:26:47

Orbital_sFear
Member
Registered: 2014-10-13
Posts: 50

Re: ROCm / Ollama AMD stopped working with recent package update

Okay, the day came: ROCm 7 dropped.

No GPU is detected after updating. I'm working through the issue, but the first thing I found that looks wrong:

ldd /usr/lib/ollama/libggml-hip.so | grep -i "not found"

libggml-base.so => not found          (exists in /usr/lib/ollama/)
libhipblas.so.2 => not found
librocblas.so.4 => not found
libamdhip64.so.6 => not found         (you have .so.7)

Trying different fixes now
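
(One way to see which packages are supposed to provide those sonames, assuming a synced file database, i.e. pacman -Fy has been run:)

# Does any repo package still ship the old soname?
pacman -F libamdhip64.so.6
# Which installed package owns the new one?
pacman -Qo /opt/rocm/lib/libamdhip64.so.7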


#7 2025-11-26 00:32:48

Orbital_sFear
Member
Registered: 2014-10-13
Posts: 50

Re: ROCm / Ollama AMD stopped working with recent package update

A complete hack, but I wanted to see what would happen with the libs connected:
# For libamdhip64
sudo ln -s /opt/rocm/lib/libamdhip64.so.7 /opt/rocm/lib/libamdhip64.so.6

# Check what hipblas/rocblas versions exist and create symlinks
sudo ln -s /opt/rocm/lib/libhipblas.so /opt/rocm/lib/libhipblas.so.2
sudo ln -s /opt/rocm/lib/librocblas.so /opt/rocm/lib/librocblas.so.4

# For libggml-base
sudo ln -s /usr/lib/ollama/libggml-base.so /usr/lib/libggml-base.so

# systemd override for ollama.service
# (note: systemd does not expand $LD_LIBRARY_PATH, so list the paths explicitly)

[Service]
Environment="LD_LIBRARY_PATH=/usr/lib/ollama:/opt/rocm/lib"
Environment="ROCM_PATH=/opt/rocm"

This DOES make the GPU visible, but now when I load a model it crashes:

Nov 25 16:28:56 GT40 ollama[53569]: ROCm error: CUBLAS_STATUS_NOT_SUPPORTED
Nov 25 16:28:56 GT40 ollama[53569]:   current device: 0, in function ggml_cuda_op_mul_mat_cublas at /build/ollama/src/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:1406
Nov 25 16:28:56 GT40 ollama[53569]:   hipblasGemmEx(ctx.cublas_handle(id), HIPBLAS_OP_T, HIPBLAS_OP_N, row_diff, src1_ncols, ne10, &alpha_f32, src0_ptr, HIPBLAS_R_16B, ne00, src1_ptr, HIPBLAS_R_16B, ne10, &beta_f32, dst_bf16.get(), HIPBLAS_R_16B, ldc, HIPBLAS_R_32F, HIPBLAS_GEMM_DEFAULT)
Nov 25 16:28:56 GT40 ollama[53569]: /build/ollama/src/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:88: ROCm error

Nov 25 16:28:56 GT40 ollama[53569]: rbp    0x7fca0f7fbf40
Nov 25 16:28:56 GT40 ollama[53569]: rsp    0x7fca0f7fbf00
Nov 25 16:28:56 GT40 ollama[53569]: r8     0x0
Nov 25 16:28:56 GT40 ollama[53569]: r9     0x0
Nov 25 16:28:56 GT40 ollama[53569]: r10    0x0
Nov 25 16:28:56 GT40 ollama[53569]: r11    0x246
Nov 25 16:28:56 GT40 ollama[53569]: r12    0x7fc9c1e73a56
Nov 25 16:28:56 GT40 ollama[53569]: r13    0x58
Nov 25 16:28:56 GT40 ollama[53569]: r14    0x6
Nov 25 16:28:56 GT40 ollama[53569]: r15    0x7fc9c1e4bbac
Nov 25 16:28:56 GT40 ollama[53569]: rip    0x7fca67c9890c
Nov 25 16:28:56 GT40 ollama[53569]: rflags 0x246
Nov 25 16:28:56 GT40 ollama[53569]: cs     0x33
Nov 25 16:28:56 GT40 ollama[53569]: fs     0x0
Nov 25 16:28:56 GT40 ollama[53569]: gs     0x0


#8 2025-11-26 04:38:10

jianglai
Member
Registered: 2013-02-26
Posts: 2

Re: ROCm / Ollama AMD stopped working with recent package update

It looks like upstream doesn't support 7.x yet?

https://github.com/ollama/ollama/issues/12734


#9 2025-11-26 05:04:16

Orbital_sFear
Member
Registered: 2014-10-13
Posts: 50

Re: ROCm / Ollama AMD stopped working with recent package update

jianglai wrote:

It looks like upstream doesn't support 7.x yet?

https://github.com/ollama/ollama/issues/12734

I think you're correct. There's talk on GitHub of patches that work for people, but I don't see those changes in master yet.

For now I'm going to wave the white flag, install ollama-vulkan, and pass OLLAMA_VULKAN=1. So far Vulkan seems stable and roughly the same speed as ROCm. I'll keep testing new Ollama releases with ROCm as they come out. Thanks :)
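
(For anyone replicating the fallback, a sketch; OLLAMA_VULKAN is read from the server's environment:)

# One-off, in the foreground
OLLAMA_VULKAN=1 ollama serve

# Or persistently, via a systemd drop-in for the service:
# [Service]
# Environment="OLLAMA_VULKAN=1"
sudo systemctl edit ollama.service
sudo systemctl restart ollama.service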


#10 2025-11-26 15:47:43

Social_HKr
Member
Registered: 2025-11-26
Posts: 2

Re: ROCm / Ollama AMD stopped working with recent package update

Workaround which works for me:

1. Make sure you have ROCm installed (not needed if you were running Ollama previously):

pacman -Syu --noconfirm && pacman -S --noconfirm rocm-core

2. Install the latest Ollama binaries (the download redirects to GitHub):

curl -fsSL "https://ollama.com/download/ollama-linux-amd64.tgz" | tar -xzf - -C "/usr"
curl -fsSL "https://ollama.com/download/ollama-linux-amd64-rocm.tgz" | tar -xzf - -C "/usr"

3. When the Arch package is fixed, remove the binaries and install the 'ollama-rocm' package:

rm -rf /usr/bin/ollama /usr/lib/ollama
pacman -S --noconfirm ollama-rocm
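
Either way, a quick sanity check that the expected binary is in place and the GPU is detected (a sketch, assuming the systemd service is in use):

ollama --version
sudo systemctl restart ollama.service
journalctl -u ollama -b | grep -i "inference compute"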


#11 2025-11-26 15:53:07

gromit
Administrator
From: Germany
Registered: 2024-02-10
Posts: 1,368

Re: ROCm / Ollama AMD stopped working with recent package update

Which version of ollama-rocm are you using? AFAIU version 0.13.0-2 should work just fine.


#12 2025-11-26 16:25:27

Social_HKr
Member
Registered: 2025-11-26
Posts: 2

Re: ROCm / Ollama AMD stopped working with recent package update

Thank you, indeed it works with 0.13.0-2.

$ pacman -Qi ollama-rocm | grep Version
Version         : 0.13.0-2
$ ollama serve
(...)
time=2025-11-26T16:20:09.385Z level=INFO source=types.go:42 msg="inference compute" id=GPU-703a94663dfbee97 filter_id="" library=ROCm compute=gfx1100 name=ROCm0 description="AMD Radeon RX 7900 XTX" libdirs=ollama driver=70125.45 pci_id=0000:03:00.0 type=discrete total="24.0 GiB" available="23.2 GiB"
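
(A final check once a model is loaded: the PROCESSOR column should read 100% GPU when offloading works:)

ollama ps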

