#1 2024-12-03 11:14:52

rogorido
Member
Registered: 2009-08-17
Posts: 113

[half-solved] Ollama-cuda error after upgrade (to ollama-cuda 0.4.5)

I see there were some changes in the ollama packages in the last PKGBUILD revisions... In any case, with version 0.3.12 I could use ollama-cuda without problems. The latest version (ollama-cuda 0.4.5) gives an error:

```
ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: no kernel image is available for execution on the device
  current device: 0, in function ggml_cuda_compute_forward at llama/ggml-cuda.cu:2403
  err
llama/ggml-cuda.cu:132: CUDA error
```

I have an NVIDIA GeForce 940MX (Maxwell architecture). Maybe the problem is the big patch I see in the latest PKGBUILD and that architecture is no longer supported. But if I compile llama.cpp (not ollama!) with CUDA myself, I do not have any problems. A rough sketch of the kind of build I mean is below.
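For comparison, something along these lines; the exact option names depend on the llama.cpp version, and pinning compute capability 5.0 for the 940MX is an assumption on my part:

```
# check the card's CUDA compute capability (a 940MX should report 5.0, i.e. Maxwell);
# compute_cap needs a reasonably recent nvidia-smi
nvidia-smi --query-gpu=compute_cap --format=csv,noheader

# configure llama.cpp with CUDA enabled and target Maxwell explicitly
# (GGML_CUDA is the current llama.cpp CMake option; older trees used LLAMA_CUDA/LLAMA_CUBLAS)
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=50
cmake --build build --config Release
```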

Any idea?

Last edited by rogorido (2024-12-04 21:24:53)


#2 2024-12-03 17:03:53

loqs
Member
Registered: 2014-03-06
Posts: 18,180

Re: [half-solved] Ollama-cuda error after upgrade (to ollama-cuda 0.4.5)

You can use the ALA (Arch Linux Archive) to determine the first version with the issue.
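A rough sketch of how that could look; the package file name and pkgrel below are only examples, so check the actual listing under https://archive.archlinux.org/packages/o/ollama-cuda/ :

```
# install an older ollama-cuda build straight from the Arch Linux Archive
sudo pacman -U https://archive.archlinux.org/packages/o/ollama-cuda/ollama-cuda-0.3.12-1-x86_64.pkg.tar.zst

# then step forward version by version until the CUDA error reappears;
# the first failing package is the one with the issue
```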


#3 2024-12-04 21:24:37

rogorido
Member
Registered: 2009-08-17
Posts: 113

Re: [half-solved] Ollama-cuda error after upgrade (to ollama-cuda 0.4.5)

loqs wrote:

You can use the ALA (Arch Linux Archive) to determine the first version with the issue.

Thanks, I did not know about that archive. In any case, I think the problem is that the upgraded package no longer supports my card, so I will tag the thread as "half-solved"...
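If the newer builds really dropped the card, one option is to keep the last working version and stop pacman from upgrading it (a sketch; it assumes 0.3.12 is reinstalled from the ALA first):

```
# /etc/pacman.conf -- keep the last working ollama-cuda from being upgraded
[options]
IgnorePkg = ollama-cuda
```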
