I also wrote a small program that measures the time of some random load. I never saw a spike there, so I'm guessing it's not the system clock hiccuping, and it's still purely connected to apps using the GPU.
=> Get Mutter out of the way and try e.g. Openbox (though a dedicated X11 server should have ruled that out already, but anyway ...)
I tried with openbox and it still happens.
Also, I was probably too hasty in saying it does not happen with Vulkan. I tried another benchmark, and it does sometimes report frame times way past a second, though with no visible animation glitches.
It really makes me wonder whether there is a clock problem, with overly long times being reported occasionally. I'll put together a quick piece of code to test it.
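A quick sketch of such a test (my own illustration, not the poster's actual code): sample the clock at a fixed interval and flag any gap far larger than the sleep period, which would implicate the clock rather than the GPU.

```shell
#!/bin/sh
# Sample the clock every ~10 ms for about a second and report any gap
# wildly larger than the sleep interval.
prev=$(date +%s%N)
for i in $(seq 1 100); do
    sleep 0.01
    now=$(date +%s%N)
    gap_ms=$(( (now - prev) / 1000000 ))
    # ~10 ms expected per iteration; flag anything far beyond that.
    if [ "$gap_ms" -gt 250 ]; then
        echo "spike: ${gap_ms} ms between samples"
    fi
    prev=$now
done
echo "done sampling"
```

If this stays quiet while the benchmark hiccups, the clock itself is probably fine.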
It does not look like the rendering stalls at any moment. Rather, it looks as if some render buffer outpaces the output buffer and frames get dropped once the queue is full.
What window manager is this? Is there any compositor running? Do you enforce the full composition pipeline (or trigger it inadvertently, e.g. by scaling the output)?
Do you have triple buffering enabled in the nvidia driver (Option "TripleBuffer" "true")? Does it make a difference?
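For reference, that option goes in the Device section of the X configuration, roughly like this (the identifier and file path here are illustrative):

```
# /etc/X11/xorg.conf.d/20-nvidia.conf (illustrative path)
Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
    Option     "TripleBuffer" "true"
EndSection
```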
Did you disable buffer flipping (checkbox in nvidia-settings)?
Yeah, it does not look like it, but other benchmarks (FurMark, for one) report hundreds of milliseconds as the frame time even though it clearly never takes that long to render the image.
I'm using GNOME 3 with default out-of-the-box settings. I haven't touched composition settings, so it's using whatever comes by default with the DE package group.
Enabling triple buffering increases the average and max FPS quite a bit in the unigine-heaven benchmark with the same settings as in the video I recorded, but the hiccup still happens and FPS occasionally drops to 9.
The buffer-flipping setting does not seem to have any effect.
Have you tried running those benchmarks in their own X server?
I did try this, and the glitch still happens.
Do you think you could make a video of this?
I made a video: https://youtu.be/uk_IduqBZ2Q
It happens at about 0:19. Keep an eye on the FPS and Min FPS.
You need to run this from a TTY; it won't work if you try launching it from inside an existing X session.
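Concretely, that means switching to a text console (e.g. Ctrl+Alt+F3), logging in there, and starting the benchmark on a fresh display (the benchmark path here is hypothetical):

```
xinit /opt/unigine-heaven/heaven -- :1
```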
Ah, right. Of course. Thanks!
Lone_Wolf wrote: Have you tried running those benchmarks in their own X server?
I tried this quickly, but wasn't able to start another Xorg server because of this:
/usr/lib/xorg-server/Xorg.wrap: Only console users are allowed to run the X server
I haven't had time to figure out why yet. I'll look into it when I have some spare time.
Do you think you could make a video of this?
I guess that's a possibility. I need to do some research on video recording first, but sure, I'll make a video of it if it's easy enough.
Does it also happen with ultra-light GL clients like glxgears?
Yes.
Do you have a high-frequency output (144 Hz or so) or G-SYNC?
No. I have two HP ZR24w monitors at 60 Hz and 1920x1200 resolution.
Is this an Optimus system?
I don't think so, no. From the wiki page, Optimus looks to be about running two different GPUs at once, and the only GPU in my system is my 1080 Ti.
Does this maybe sync with NTP updates?
I'm not sure how to check this. As I understand NTP, though, it shouldn't make jumps like that when the offset is small; instead it spreads the correction over some time, like a minute or several. I may be wrong about that, so I'll check it out.
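That understanding matches default ntpd behavior, as far as I know: offsets above roughly 128 ms get stepped, while smaller ones are slewed at no more than 500 ppm. A quick back-of-the-envelope calculation (my own illustration):

```shell
# Slewing at 500 ppm corrects 0.5 ms per second of real time, so even a
# 100 ms offset (just under the step threshold) takes ~200 s to correct,
# far too gradual to show up as a one-second frame-time spike.
awk 'BEGIN { offset_s = 0.1; slew = 500e-6; printf "%.0f seconds\n", offset_s / slew }'
# prints: 200 seconds
```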
Thanks!
A friend of mine guesses it's about how Linux distributes CPU time among threads, but I'd say that's a pretty far-fetched idea, as there is no other load whatsoever when I run the OpenGL benchmarks. I'm still out of ideas myself.
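If anyone wants to rule the scheduler theory out anyway, one option is pinning the benchmark to a dedicated core and raising its priority with the standard util-linux tools (the benchmark path here is hypothetical):

```
# Pin to core 2 and run with SCHED_FIFO priority 50; if the hiccup
# survives this, ordinary thread scheduling is an unlikely culprit.
sudo taskset -c 2 chrt -f 50 /opt/unigine-heaven/heaven
```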
A co-worker found this for me: https://communities.intel.com/thread/117546
I tried disabling turbo and giving the CPU more voltage, but that did not help. Before all that I updated my BIOS to the latest version, since I noticed my previous BIOS version shouldn't even have supported my CPU. The problem still exists.
Something IO-intensive running in the background could also cause this.
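A way to check that would be to watch disk activity while the benchmark runs (both tools are from the sysstat package):

```
# Per-device utilization, refreshed every second; a %util burst that
# lines up with the FPS drop would point at background IO.
iostat -x 1
# Per-process disk IO, to identify the offender if there is one.
pidstat -d 1
```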