I'm giving glXWaitVideoSyncSGI a go in place of glFinish. It seems to improve responsiveness, both in general and with the Chromium problem (though not completely), without the spike in CPU usage. Probably doesn't do much for what sekret sees, however. Hopefully it makes things less bad overall.
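For anyone curious, the swap path now looks roughly like this. This is a sketch, not the exact Ortle code: `draw_scene` and `present` are placeholder names, the SGI entry points usually have to be resolved with glXGetProcAddress (direct calls shown for brevity), and it needs a current direct GLX context plus an X display, so it isn't runnable headless.

```c
#include <GL/glx.h>
#include <GL/glxext.h>

extern void draw_scene(void);   /* placeholder for the real renderer */

void present(Display *dpy, GLXDrawable win)
{
    unsigned int count;

    draw_scene();

    /* Instead of glFinish(), block only until the next vertical
     * retrace: read the vblank counter, then wait until it has
     * advanced past its current value. */
    glXGetVideoSyncSGI(&count);
    glXWaitVideoSyncSGI(2, (count + 1) % 2, &count);

    glXSwapBuffers(dpy, win);
}
```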
I ran plain
compton
and watched top. Basically there's no difference in CPU consumption with or without compton. So I think this command by itself isn't enough, so I used
compton --vsync opengl --glx-use-copysubbuffermesa
as mentioned above by Jedipottsy. Now the CPU consumption of X.bin goes up from ~0.5 % to ~1.5 % when the system is idle. Compton itself only consumes ~0.3 %.
What I did with dwm was move a floating window around to see the behavior. With both Ortle and compton the window lags a bit behind the mouse pointer. I think that's normal with compositors. But the lag is bigger with Ortle.
Here's my output of glxinfo -v. Hope there's something useful for you in it. I'm no coder, so unfortunately I cannot give you more help.
I think it's likely. The amount of work that Ortle itself does isn't huge in theory (process pending events; issue OpenGL commands; wait and swap), so the apparent CPU usage on machine A could be smaller than on machine B if, for example, all gl* commands are just passed to a command buffer (and return immediately) on machine A but are processed entirely before returning on machine B. The actual CPU usage could very well be equal in both cases, but it depends on the driver. It's something I will try to minimize as time goes on, but it will have to wait until I solidify the basic functionality.
It would be interesting to know how much CPU compton (using its GLX backend) uses for you as a comparison, if you don't mind spending the time to look. Also useful would be the output of glxinfo -v (it's in the mesa-demos package). The more information you can flood me with, the better.
@Jedipottsy
Would you mind confirming whether or not the lag issue happens when Chromium's hardware acceleration is disabled? The setting is near the bottom of the chrome://settings page after you click 'show advanced settings.' The output of glxinfo -v (it's in the mesa-demos package) would also be helpful.
The way you explain it, triple buffering could be useful, but still requires far more thought, coding and testing on my part. The mysterious thing about the Chromium issue - assuming your problem is the same as what I can reproduce - is that glXSwapBuffers waits several frames before completing the swap when moving another window over the Chromium window. Why it waits like that is still a whodunit, but I'm working on it.
My understanding with triple buffering is that while swapping the buffer is fast no matter the solution, you don't have to wait for either the vsync to initiate a buffer draw or wait for the buffer to finish rendering to swap the buffer, as there should always be an up-to-date buffer to swap with.
I.e., if a frame is still drawing and can't be swapped out, the other finished buffer will be swapped, and the rendering doesn't have to wait for V-sync to start or stop. But like I said, I've no idea about anything GL related; I just know it provides fast, tear-free rendering in fullscreen applications/games with negligible mouse acceleration.
Edit
The issues persist in Chromium; no idea why I thought they wouldn't.
Could it be that, on your system with its hardware configuration plus the closed-source NVIDIA driver, Ortle uses the GPU more/better than on my system with AMD and the open-source driver? That might cause the difference, right?
Thanks for noticing it.
Also, I think I had to install mesa manually. It's definitely needed for the GLX headers, but shouldn't be needed to run (which I think libgl takes care of).
*Edit: with usage spikes whenever configure events (e.g. moving and resizing) came in.
edit: Since I'm a dwm user and most probably won't ever want to use a compositor, I'd be happy to give the package to someone else. I was just in the mood to create a package.
Another thing: is it normal that the compositor uses about 10 % of my 2× 1.6 GHz CPU when NOTHING is happening on screen? What a waste of resources!
I was able to reproduce the slow rendering problem with Chromium + hardware acceleration (but not without). Not 100% sure why yet, but throwing a glFinish before swapping buffers sped things up for me. Please give the latest a shot!
I'll look into triple buffering. I really want to say that it probably won't help because swapping buffers is currently (with glFinish) an order of magnitude faster than the actual rendering, but the GL implementation is a black box so what do I know.
@sekret
It is BSD, and I'm really glad to hear that it's working on AMD. Thanks for the PKGBUILD as well. Would you please change 'mesa-libgl' to 'libgl'? And I think that 'libxext' should be added as a dependency as well.
I put in dependencies that showed up during the make process and with the help of namcap in a clean chroot environment. It runs just fine on my system with an AMD card with the open source driver.
The license is BSD, right?
http://en.wikipedia.org/wiki/Multiple_b … _buffering
Edit
I created a quick PKGBUILD and gave this a try. Moving windows around is incredibly laggy, with a 1-2 second delay.
This works flawlessly for me, not sure if it's any help:
compton --vsync opengl --glx-use-copysubbuffermesa -b
Edit 2
Upon further testing it appears as though windows move normally under most conditions; however, once they move over some windows they slow to a crawl. The best example is moving something over the Chromium window.
Ortle: ERROR: Foreign exception caught:
Ortle: ERROR: Compatible framebuffer not found.
Ortle: ERROR: Shutting down...
It'd be a shame if the lag can't be fully removed. I'll continue without a compositor then, the occasional tearing in windowed videos doesn't bother me as much as the lag does.
The opengl-swc (SGI_swap_control) and opengl-mswc (MESA_swap_control) methods work best; moving windows is smooth with almost no lag. Still, the lag bothers me.
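(For reference, both of those methods boil down to requesting a swap interval of 1, so that each glXSwapBuffers waits for one vblank. A rough sketch, not compton's or Ortle's actual code; the entry point and typedef are from the real GLX_SGI_swap_control extension, but it needs a current GLX context and a display to actually run:)

```c
#include <GL/glx.h>
#include <GL/glxext.h>

void enable_vsync(void)
{
    /* Neither swap-control entry point is guaranteed to be exported
     * directly, so resolve it through glXGetProcAddress.  The MESA
     * variant ("glXSwapIntervalMESA") works the same way. */
    PFNGLXSWAPINTERVALSGIPROC swap_interval =
        (PFNGLXSWAPINTERVALSGIPROC)glXGetProcAddress(
            (const GLubyte *)"glXSwapIntervalSGI");

    if (swap_interval)
        swap_interval(1);   /* 1 = wait one vblank per swap */
}
```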
It bothers me, too, and I don't think I can get rid of it entirely.
Moving windows, as I handle it at least, is probably going to result in the same sort of lag because I can't tell OpenGL to render a quad where the window is now; I have to tell OpenGL to render the quad where the X server last told me the window was. Maybe if Ortle were a window manager too (and could therefore be in charge of ConfigureRequests), I could get rid of it, but that's not the case. The best I've done is try to make it as smooth as possible. This is a long way of saying that if compton looks smooth to you, Ortle will probably only be about that "good" as well.
A possibly interesting note is that this is why glXBindTexImageEXT (GLX_EXT_texture_from_pixmap) is so important (pretty sure compton's GLX backend uses this). I don't have to update window damage or anything at all; the server keeps the texture automatically up to date, so you don't (well, you shouldn't) see any lag in movies or games. Or, using terms from the previous paragraph, I can tell OpenGL to render a window's quad as it is now.
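A rough sketch of that mechanism, for the curious. Error handling and the fbconfig search are elided, `bind_window_pixmap` is an illustrative name, and it needs a live X connection to run; the attributes and entry point are from the real extension, which has to be resolved through glXGetProcAddress:

```c
#include <GL/glx.h>
#include <GL/glxext.h>

void bind_window_pixmap(Display *dpy, GLXFBConfig config, Pixmap pixmap)
{
    /* Wrap the window's backing pixmap in a GLX pixmap that can act
     * as a texture source. */
    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glx_pixmap = glXCreatePixmap(dpy, config, pixmap, attribs);

    PFNGLXBINDTEXIMAGEEXTPROC bind_tex_image =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
            (const GLubyte *)"glXBindTexImageEXT");

    /* Bind the pixmap contents to the currently bound GL texture;
     * from now on the server keeps the texture up to date, and the
     * compositor just renders a quad with it. */
    bind_tex_image(dpy, glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);
}
```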