When rendering a scene with Cycles, Blender uses around 90% of my CPU. No matter which Cycles Rendering Device I choose (CUDA or OptiX, with GPU, CPU, or both), it always takes around 7 seconds to render each frame of the base scene with the cube you get from File -> New -> General.
When I choose to render with Eevee, the GPU usage bounces between 15% and 30%, and each frame takes fractions of a second to render. The CPU tops at around 20%.
With Cycles, the GPU usage is 0%, each frame takes 7 seconds to render, and the CPU goes to around 90%.
I tried three versions of Blender (all downloaded today): the one in the official repos, the one from Steam, and the compressed archive from the official website. Same results with all three.
blender 17:3.4.1-14
nvidia-dkms 525.89.02-2
cuda 11.8.0-1
CPU: AMD Ryzen 5 5600X (12) @ 3.700GHz
GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate
RAM: 64190MiB
I thought maybe that's the expected rendering time when using Cycles given my specs, but that doesn't seem likely, since rendering with the GPU and with the CPU takes exactly the same amount of time.
Activate 'Cycles' in the scene, set 'Viewport Shading' to 'Rendered', and check the upper-left screen area for rendering-engine status/errors. Also check other Blender logs if possible.
Maybe you need to recompile the CUDA kernel with nvcc.
sys2064
Have you actually enabled it? Edit/Preferences, the System tab on the left, then either CUDA or OptiX at the top (depending on whether or not you have an RTX card), and make sure that your GPU is selected.
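If you want to check this without clicking through the UI, the same settings are reachable from Blender's Python API. A sketch (untested here, assuming a Blender 3.x binary on your PATH):

```shell
# Sketch: list and enable Cycles compute devices headlessly (Blender 3.x).
# Swap "OPTIX" for "CUDA" to test the other backend.
blender --background --python-expr '
import bpy
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"
prefs.get_devices()  # refresh the device list
for dev in prefs.devices:
    dev.use = True   # enable every listed device
    print(dev.type, dev.name)
'
```

If your RTX 3060 Ti doesn't show up in that list, Blender isn't seeing the card at all, which would explain the CPU fallback.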
Ryzen 5900X 12 core/24 thread - RTX 3090 FE 24 Gb, Asus B550-F Gaming MB, 128Gb Corsair DDR4, Cooler Master N300 chassis, 5 HD (2 NvME PCI, 4SSD) + 1 x optical.
Linux user #545703
/ is the root of all problems.
Have you actually enabled it? Edit/Preferences, the System tab on the left, then either CUDA or OptiX at the top (depending on whether or not you have an RTX card), and make sure that your GPU is selected.
Yes, as I mention in the original post, changing those settings has zero effect.
My card is an RTX 3060 Ti, and it shows up in the System settings in Blender, allowing me to choose between OptiX and CUDA. But it doesn't matter which one I choose, or even if I choose none and let the CPU do its work... always 90% CPU load and 7 seconds per frame.
EDIT:
Activate 'Cycles' in the scene, set 'Viewport Shading' to 'Rendered', and check the upper-left screen area for rendering-engine status/errors. Also check other Blender logs if possible.
Maybe you need to recompile the CUDA kernel with nvcc.
Just tried that, with the base cube scene, and no errors. When rotating the viewport camera, it takes around 3 seconds to process all 1024 samples.
EDIT 2:
Just tried launching Blender from the command line with the --debug-cycles argument. I'm getting this:
I0219 11:42:00.109244 4683 device_impl.cpp:530] Mapped host memory limit set to 63,014,125,568 bytes. (58.69G)
I0219 11:42:00.147248 4683 device_impl.cpp:58] Using AVX2 CPU kernels.
I0219 11:42:00.147639 4683 sync.cpp:296] Total time spent synchronizing data: 0.000152111
I0219 11:42:00.147756 4693 integrator.cpp:356] Cycles adaptive sampling: automatic min samples = 64
Is that normal?
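What strikes me is that the output only mentions CPU kernels ("Using AVX2 CPU kernels") and never a CUDA or OptiX device. A tiny, hypothetical filter over the log makes that check explicit:

```python
# Filter for --debug-cycles output: if no line mentions a CUDA/OptiX
# device, Cycles has most likely fallen back to the CPU.
sample_log = """\
I0219 11:42:00.109244 4683 device_impl.cpp:530] Mapped host memory limit set to 63,014,125,568 bytes. (58.69G)
I0219 11:42:00.147248 4683 device_impl.cpp:58] Using AVX2 CPU kernels.
I0219 11:42:00.147639 4683 sync.cpp:296] Total time spent synchronizing data: 0.000152111
I0219 11:42:00.147756 4693 integrator.cpp:356] Cycles adaptive sampling: automatic min samples = 64
"""

def uses_gpu(log: str) -> bool:
    """Return True if any line mentions a CUDA or OptiX device."""
    return any(word in log for word in ("CUDA", "OptiX", "OPTIX"))

print("GPU kernels mentioned:", uses_gpu(sample_log))  # → False for the log above
```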
EDIT 3:
Could it be that the CUDA version in the latest driver is incompatible with the version they used to build Blender?
The official Arch package lists "cuda" as an optional dependency and as a build dependency, but the CUDA package itself is "flagged out-of-date on 2022-12-09" (the version is 11.8.0).
Also, Blender's official docs ("Building Cycles with GPU Binaries") state:
"Install CUDA Toolkit 11. Newer versions may work but are not tested as well"
When I run nvidia-smi I get that my CUDA version is 12.0.
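As I understand it (an assumption worth double-checking), nvidia-smi reports the highest CUDA version the *driver* supports, not the installed toolkit version, and a driver version at or above the toolkit version should still run kernels built against the older toolkit. So 12.0 vs. 11.8 by itself shouldn't be the problem:

```python
# Sketch of the compatibility reasoning: the driver's supported CUDA
# version generally just needs to be >= the toolkit version Blender's
# kernels were built against.
def parse(v: str) -> tuple[int, int]:
    major, minor = v.split(".")[:2]
    return int(major), int(minor)

driver_cuda = "12.0"   # reported by nvidia-smi
toolkit_cuda = "11.8"  # the Arch "cuda" package version

compatible = parse(driver_cuda) >= parse(toolkit_cuda)
print("driver can run toolkit-built kernels:", compatible)  # → True
```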
Last edited by soaliar (2023-02-19 15:12:51)
Just tried that, with the base cube scene, and no errors. When rotating the viewport camera, it takes around 3 seconds to process all 1024 samples.
Are you sure you have GPU Compute enabled as well?
sys2064
When I run nvidia-smi I get that my CUDA version is 12.0.
I have V12, too, and have no issues with either the repository version of Blender or the standalone download from blender.org, so it isn't that (although I use dual GTX 1070s, not an RTX card, which shouldn't make a difference).
soaliar wrote: Just tried that, with the base cube scene, and no errors. When rotating the viewport camera, it takes around 3 seconds to process all 1024 samples.
Are you sure you have GPU Compute enabled as well?
Yes, that's enabled too. But still 7 seconds per frame.