I see, there were some changes to the ollama packages in the recent PKGBUILD versions... In any case, with version 0.3.12 I could use ollama-cuda without problems. The latest version (ollama-cuda 0.4.5) gives an error:
```
ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: no kernel image is available for execution on the device
current device: 0, in function ggml_cuda_compute_forward at llama/ggml-cuda.cu:2403
err
llama/ggml-cuda.cu:132: CUDA error
```
I have an NVIDIA GeForce 940MX (Maxwell architecture). Maybe the problem is the big patch I see in the latest PKGBUILD, and the architecture is no longer supported. But if I compile llama.cpp (not ollama!) with CUDA, I do not have any problems.
Any idea?
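Not confirmed from the thread, but a "no kernel image is available for execution on the device" error at runtime usually means the binary was compiled without device code for the GPU's compute capability (5.0 for a Maxwell-era 940MX). A minimal sketch of checking that, assuming a driver recent enough for `nvidia-smi --query-gpu=compute_cap`; the value is hard-coded below so the snippet runs without a GPU, and the rebuild flags are llama.cpp's CMake names, which may not match ollama's build:

```shell
# Sketch only: map the GPU's compute capability to the value CMake expects.
# A GeForce 940MX (Maxwell) reports compute capability 5.0; hard-coded here,
# the real query would be:
#   nvidia-smi --query-gpu=compute_cap --format=csv,noheader
cap="5.0"
arch="${cap/./}"     # "5.0" -> "50"
echo "CMAKE_CUDA_ARCHITECTURES=${arch}"

# A llama.cpp-style rebuild targeting that architecture might then look like
# (flag names from llama.cpp's CMake build; check the ollama PKGBUILD for its
# actual architecture list):
#   cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=${arch}
#   cmake --build build
```

If the packaged binary's architecture list dropped `50`, that would explain why a self-built llama.cpp works while the package does not.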
Last edited by rogorido (2024-12-04 21:24:53)
You can use the ALA to determine the first version with the issue.
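For anyone finding this later: the ALA is the Arch Linux Archive, which keeps every released version of a package, so you can binary-search for the first broken build. A rough sketch; the version string below is only an example, and the real file names must be read from the archive listing:

```shell
# Sketch only: build the URL of an older ollama-cuda package from the
# Arch Linux Archive. The version "0.3.12-1" is an example; list the
# directory in a browser to see which versions actually exist.
repo="https://archive.archlinux.org/packages/o/ollama-cuda"
ver="0.3.12-1"
pkg="${repo}/ollama-cuda-${ver}-x86_64.pkg.tar.zst"
echo "$pkg"

# Then install that exact version (needs root):
#   pacman -U "$pkg"
```

Repeating this with versions between the last-good and first-bad release narrows the issue down to a single package update.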
Thanks, I did not know about that resource. In any case, I think the problem is that the upgrade dropped support for my card... I will tag the thread as "half-solved"...