The reason is that nvidia performs a busy wait. Only setting __GL_YIELD="USLEEP" avoids this; the default and especially "NOTHING" will boost CPU usage for nothing. "NOTHING" will also rather steal CPU slices from KWin's core functionality.

The next issue I found is that libkwinnvidiahack is ineffective - it's loaded and executed but (perhaps due to libkdeinit_kwin?) apparently too late, since setting the env there has absolutely no impact, while setting it on the terminal allows me to control the CPU load for sure. The additional load is not minor, but a factor of 5 when playing a video here.

Tasks:
1. figure out how to make libkwinnvidiahack operative again
2. set __GL_YIELD to USLEEP and maybe some others like __GL_FSAA_MODE and __GL_LOG_MAX_ANISO (both to 0)

Reproducible: Always
Git commit 031d290f9a07c533daa80547424e6d1c1b9dac5b by Thomas Lübking.
Committed on 23/07/2013 at 20:34.
Pushed by luebking into branch 'KDE/4.11'.

prevent yield/swap cpu overhead on nvidia

REVIEW: 111663

M  +12   -0    kwin/eglonxbackend.cpp
M  +12   -0    kwin/glxbackend.cpp

http://commits.kde.org/kde-workspace/031d290f9a07c533daa80547424e6d1c1b9dac5b
Git commit a6b8844eacc7734cd623fe40ff2114009b583165 by Thomas Lübking.
Committed on 03/08/2013 at 14:17.
Pushed by luebking into branch 'KDE/4.11'.

remove nvidiahack lib

1. it apparently is ineffective
2. if it was effective, its current behavior would not exactly be helpful (sets __GL_YIELD to NOTHING, causing busy waits on doublebuffer swapping)
3. it does for sure pollute the doublebuffer/usleep detection (setenv is set to override), i.e. the overhead detection code gets a different opinion on __GL_YIELD than libGL

REVIEW: 111858

M  +0    -14   kwin/CMakeLists.txt
D  +0    -52   kwin/nvidiahack.cpp

http://commits.kde.org/kde-workspace/a6b8844eacc7734cd623fe40ff2114009b583165
*** Bug 323646 has been marked as a duplicate of this bug. ***
Created attachment 81758 [details]
KWin loader script

Attached is a loader shell script that sets the required environment vars, tries to disable FXAA and launches kwin. Place it somewhere up in $PATH (e.g. ~/bin often is) to shadow the kwin binary.
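For readers who cannot open the attachment, a minimal sketch of what such a loader does. This is NOT the actual attachment 81758, just an illustration of the idea: export the yield strategy, then hand over to the real kwin binary found in KDE's exec paths while skipping the wrapper itself (the real script additionally tweaks FXAA via nvidia-settings).

#!/bin/bash
# illustration only, not the attached script
export __GL_YIELD="USLEEP"
THIS_BIN="$(readlink -f "$0")"
IFS=':'
for EXEC_PATH in $(kde4-config --path exe); do
    CANDIDATE="${EXEC_PATH}kwin"
    if [ "$(readlink -f "$CANDIDATE")" != "$THIS_BIN" ] && [ -x "$CANDIDATE" ]; then
        exec "$CANDIDATE" "$@"
    fi
done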
The Kwin loader script didn't work for me, causing windows to have no decorations or compositing. Adding 'export __GL_YIELD="USLEEP"' to /etc/profile fixed bug 323646 (marked as a duplicate of this bug) for me.
Sounds as if the real kwin binary is not found in the exec paths, i.e. kwin is not started?

> IFS=':' EXEC_PATHS=`kde4-config --path exe`
> for BINARY in ${BINARIES}; do
>    for EXEC_PATH in ${EXEC_PATHS}; do
>       if [ "${EXEC_PATH}${BINARY}" = "$THIS_BIN" ]; then
>          continue;
>       fi
>       if [ -e "${EXEC_PATH}${BINARY}" ]; then
>          echo "$THIS_BIN started ${EXEC_PATH}${BINARY}" > /tmp/kwin.log

About triple buffer detection: if you've enabled it for sure and it works for sure, "export KWIN_TRIPLE_BUFFER=1" instead.

Does the "kwin (1212)" debug output (run kdebugdialog) indicate that "Triple buffering detection: NOT available"?
Created attachment 81771 [details]
kwin debug output

I have no idea how to verify whether or not triple-buffering is indeed enabled, but I do have it as an option in my xorg.conf file. The X.org log file at least acknowledges that the option is recognized, but I don't see anything beyond that to indicate that triple-buffering is actually active. A Google search for a way to verify this has turned up nothing. I have run kwin with the debug info enabled and have attached the output as well as my Xorg log file.
Created attachment 81772 [details] X.org log file
Created attachment 81773 [details]
my xorg.conf

Here's my xorg.conf as well.
Triple buffering is enabled by the driver. The KWin output is too short - it takes a few seconds (many frames) to figure out whether triple buffering is enabled or not (there seems to be no legal way to query it, so we just measure how long swapping takes - a fast return means non-blocking swaps, i.e. triple buffering).

Does the screen actually run at 100Hz (CRT?)
Well, after I copied the kwin debug output for you, I got this:

kwin(4569) KWin::SwapProfiler::end: Triple buffering detection: "NOT available" - Mean block time: 4.39506 ms

Looks like I just didn't wait long enough.

Yes, I had to create metamodes to get ideal refresh rates from my CRT. It does run at 100 Hz in this resolution (1280 x 960). Need to have high refresh for my 3D shutter glasses :)
Not to mention, kwin running buttery-smooth at 100 Hz is a thing of beauty to behold :)
I've been bitten by this bug as well. This worked just fine up to the final 4.11 release. Now I either have to set KWIN_TRIPLE_BUFFER=1 or set the tearing prevention to None and back to Automatic (or whatever I want) before it becomes active again after the login. Triple buffering is active and set, by the way.
Seems like this is a race condition of some kind. After KDE is up and running, doing a "kwin --replace" will fix the problem as well and properly activate the tearing prevention (w/o the KWIN_TRIPLE_BUFFER set):

kwin(8121) KWin::SwapProfiler::end: Triple buffering detection: "Available" - Mean block time: 0.234613 ms
I replaced 'export __GL_YIELD="USLEEP"' with 'export KWIN_TRIPLE_BUFFER=1' in my /etc/profile, and it works. I get full v-sync on login. To answer your previous question about the kwin loader script file, I had placed the script in /usr/local/bin. The true kwin binary is in /usr/bin.
(In reply to comment #14)
> running, doing a "kwin --replace" will fix the problem as well and properly
> activate the tearing prevention

This would mean that buffer swapping takes more time under load - could be due to X11, or the driver cannot flip. We'd better delay the measuring...

Just to be sure: do you have flipping allowed in nvidia-settings ("OpenGL Settings" page)?
(In reply to comment #15)
> I replaced 'export __GL_YIELD="USLEEP"' with 'export KWIN_TRIPLE_BUFFER=1'
> in my /etc/profile, and it works. I get full v-sync on login.

Notice that if swapping indeed blocks for you (for whatever reason, but the measured time suggests so), this might show inferior performance to exporting __GL_YIELD="USLEEP" instead:
1. because nvidia then would perform a busy wait (causing CPU load)
2. because kwin would approach the swap at rather random times and often miss frames, resp. waste a lot of time waiting for the sync.

> To answer your previous question about the kwin loader script file, I had
> placed the script in /usr/local/bin. The true kwin binary is in /usr/bin.

Did you get any output to /tmp/kwin.log?
I could only assume that /usr/bin is not in the output of "kde4-config --path exe" ... or /bin/sh is not bash and I've got some bash/zsh slang in there ;-)
(try replacing the shebang with #!/bin/bash)
(In reply to comment #17)
> Notice that if swapping indeed blocks for you (for what reason ever, but the
> measured time suggests such) this might show inferior performance to
> exporting __GL_YIELD="USLEEP" instead.

Okay, I have switched back to __GL_YIELD="USLEEP" and kwin's CPU usage when dragging a window quickly in circles has gone down slightly (from 3-4% down to 1-2%). Any reason why both the "__GL_YIELD" and "KWIN_TRIPLE_BUFFER" flags shouldn't be set?

> Did you get any output to /tmp/kwin.log?
> I could only assume that /usr/bin is not in the output of "kde4-config
> --path exe" ... or /bin/sh is not bash and i've some bash/zsh slang in there ;-)
> (try replacing the shebang with #!/bin/bash)

Replacing the first line with "#!/bin/bash" fixed it for me.
(In reply to comment #18)
> to 1-2%). Any reason why both the "__GL_YIELD" and "KWIN_TRIPLE_BUFFER" flags shouldn't be set?

No. USLEEPing is orthogonal to triple buffering (or even its detection) - and probably the better choice to distribute the CPU between KWin and the GL driver.

> Replacing the first line with "#!/bin/bash" fixed it for me.

I guess /bin/sh is dash?
(In reply to comment #19)
> No. USLEEPing is othogonal to triple buffering (or even it's detection) -
> and probably the better choice to distribute the CPU between KWin and the GL
> driver.

Sorry, I don't understand your answer, particularly the use of the word "orthogonal". It seems that you are saying that the two methods don't have anything to do with one another, and the USLEEP method will give superior performance to triple-buffering.

> > Replacing the first line with "#!/bin/bash" fixed it for me.
> I guess /bin/sh is dash?

Yes, it is.
(In reply to comment #20)
> It seems that you are saying that the two methods don't have
> anything to do with one another

Yes.

> and the USLEEP method will give superior performance to triple-buffering.

No, it's more complex than this.

If triple buffering works, that means the driver doesn't have to wait for a swap. "No wait" means "no busy wait" and that means no pointless CPU load, regardless of the yielding strategy. It also means we don't miss a swap, nor waste any time (w/ or w/o CPU load) on waiting for it (free for all the other action the WM has to do - like repositioning a window when you move it around)
=> triple buffering is a good idea if your GPU has sufficient VRAM (which nowadays is usually the case)

Selecting USLEEP as the yielding strategy means that the driver neither occupies too many CPU slices ("stealing them from the actual process" - as with the "NOTHING" strategy) nor may lose the CPU for unpredictable amounts of time (as with the default sched_yield() call)
=> For the WM scenario, I consider USLEEP to be the better yielding strategy - regardless of the fact that it's actually required to prevent the busy wait on double buffering setups.
(In reply to comment #16)
> This would mean that buffer swapping takes more time under load, could be
> due to X11 or the driver cannot flip

Still, there is something more going on, if you ask me. A simple "kwin --replace" works 9 out of 10 times but not always - even if the system is totally idle. Something is racy. In the rare case it does not work initially, it always works the second time... and going through the system settings has never failed to work at all.

> Just to be sure:
> you've flipping allowed in nvidia-settings ("OpenGL Settings" page)?

Yes. Even though nowadays I don't use nvidia-settings for anything anymore, I checked and put it in the Autostart to load the configuration... but it made no difference at all.

One quick note about __GL_YIELD="USLEEP": with today's kernels, usleep(0) is a noop and has been since the introduction of the HPETs, afaik. Put differently, the thread will run off its time slice just like there was no usleep(0) at all.
Hi! I would just like to confirm that just enabling triple buffering is not enough to avoid tearing (you have to switch the OpenGL mode, for instance -- just disabling and re-enabling desktop effects is not enough either). (I have intermittent tearing with nvidia proprietary drivers since 4.11.0)
*** Bug 323817 has been marked as a duplicate of this bug. ***
*** Bug 323835 has been marked as a duplicate of this bug. ***
Confirmed that TripleBuffer isn't enough at all, that alone doesn't prevent KWin from tearing massively (never seen tearing in KDE before 4.11), just like mentioned in #23.
(In reply to comment #26)
> Confirmed that TripleBuffer isn't enough at all

Because the setting is occasionally misdetected at startup (there's no legal way to ask for it) and then tearing prevention is forcefully turned off. That's already common sense. See comment #15 on how to work around that.
*** Bug 324049 has been marked as a duplicate of this bug. ***
This patch defers the triple buffering detection; I'd appreciate it if anyone suffering from the misdetection on login could apply and test it, thanks.

https://git.reviewboard.kde.org/r/112308/
I haven't had the chance to try the fix from comment #29, but I can inform you that the tearing is not limited to nVidia only. I did an upgrade to 4.11 on my laptop that has Optimus graphics (nVidia/Intel) and I get the same tearing on login with only the Intel card enabled:

user@user-laptop:~$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 525M] (rev ff)
user@user-laptop:~$

However, in this case, changing the compositing type to OpenGL 3.1 and back to OpenGL 2.0 does not help. What helps in this case is changing the tearing prevention (I selected "Re-use screen content"). The tearing immediately disappeared.

Just wanted to provide more info. I am not that versed in how to apply the patch so I need some more (newbie) info on that.

Kind Regards,
Veroslav
Me too, I would also need some more newbie info to apply the fix.
(In reply to comment #30)
> I haven't had the chance to try the fix from comment #29, but I can inform
> that the tearing is not limited to nVidia only.

Unrelated.

> What helps in this case is changing the tearing prevention

The default strategy for intel chips is "only when cheap", which for the moment means "only when large screen portions are repainted" - like fullscreen video playback or scrolling in a maximized browser.

> (I selected "Re-use screen content")

Do not use that strategy with mesa drivers! From all we can say, this causes a HUUUGE overhead. Pick "force fullscreen repaints" if desired, then restart kwin (in doubt, log out and in - restarting the compositor will usually NOT be sufficient)

> Just wanted to provide more info. I am not that versed in how to apply the
> path so I need some more (newbie) info on that.

The patch only has a tearing impact on nvidia GPUs, yet it might get you a more responsive compositor on others. To apply it, you need to get the source code of kwin, apply the patch with "patch < patch.diff" and then compile kwin. If you generally feel up to that and just require info on details, please call back.
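For the requested newbie info, a rough outline of applying and building the patch. This is only a sketch under the assumption that you build kde-workspace 4.11 from git; the repository URL, branch, install prefix and the needed build dependencies are assumptions here and differ per distro (on Kubuntu, rebuilding the distro's source package is usually the saner route):

# rough outline only, adjust paths/prefix for your distro
git clone git://anongit.kde.org/kde-workspace
cd kde-workspace
git checkout KDE/4.11
patch -p1 < /path/to/review.diff          # the diff downloaded from the review request
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX="$(kde4-config --prefix)"
make
sudo make install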
I tested the patch, but it doesn't solve the problem on my system. Triple-buffering is still not detected correctly during login, but is detected correctly when running "kwin --replace &" after login has completed.

For the record, my system:
OS: Kubuntu 12.04 64-bit w/ KDE SC 4.11.0 (from Kubuntu backports PPA)
Motherboard: ASRock X58 Extreme3 (Intel X58 chipset)
CPU: Intel Core i7 930 (2.8 GHz quad-core)
RAM: 12GB DDR3
Video: NVIDIA GeForce 7900 GS w/ 256 MB RAM (PCI Express)
Sound Card #1: Sound Blaster Audigy 2 ZS Gold
Sound Card #2: Echo Gina3G
Linux Kernel: 3.5.0-39-generic
NVIDIA video driver: 304.88
Screen Resolution: 1280 x 960
Whoops, I stand corrected... I didn't wait long enough. After running "kwin --replace &", vsync was initially enabled, but then it became disabled and I got the following output:

kwin(4163) KWin::SwapProfiler::end: Triple buffering detection: "NOT available" - Mean block time: 4.51748 ms
kwin(4163) KWin::GlxBackend::present: It seems you are using the nvidia driver without triple buffering
You must export __GL_YIELD="USLEEP" to prevent large CPU overhead on synced swaps
Preferably, enable the TripleBuffer Option in the xorg.conf Device
For this reason, the tearing prevention has been disabled.
Okay, please ignore my previous two comments until I've done more testing.
Alright, so it seems that *sometimes* it will properly detect triple-buffering and sometimes not. If I restart kwin and then just wait while doing nothing, I will get this:

kwin(4870) KWin::SwapProfiler::end: Triple buffering detection: "Available" - Mean block time: 0.673479 ms

... however, if I restart kwin and then start dragging windows around the screen, I'll get this:

kwin(4876) KWin::SwapProfiler::end: Triple buffering detection: "NOT available" - Mean block time: 9.76569 ms
kwin(4876) KWin::GlxBackend::present: It seems you are using the nvidia driver without triple buffering
You must export __GL_YIELD="USLEEP" to prevent large CPU overhead on synced swaps
Preferably, enable the TripleBuffer Option in the xorg.conf Device
For this reason, the tearing prevention has been disabled.
See https://bugs.kde.org/show_bug.cgi?id=322060
*** Bug 324190 has been marked as a duplicate of this bug. ***
(In reply to comment #36)
> Alright, so it seems that *sometimes* it will properly detect
> triple-buffering and sometimes not. If I restart kwin and then just wait
> ...
> ... however, if I restart kwin, and then start dragging windows around the
> screen, I'll get this:
>
> kwin(4876) KWin::SwapProfiler::end: Triple buffering detection: "NOT
> available" - Mean block time: 9.76569 ms

I got it (USLEEP) up to only 1.40989 ms, but in general I can confirm the effect. Either we approach with too many swap calls (and the driver blocks the third frame) or simply the swap function takes too long under stress... gonna check, but I fear the latter :-(
Hi,
is this really the same bug? I don't see abnormal CPU load on my system, no matter if vsync works or not. All my CPUs are at roughly 0% shortly after startup, during the time where everything is synced at 60 FPS, and even when it rises to 100 FPS and screen tearing appears.

My system is an Intel i7 2600K 4.5 GHz with an NVIDIA GTX 580.

Kind Regards,
Tim
(In reply to comment #39)
> Hi,
> is this really the same bug?

Ultimately yes.

> I don't see abnormal CPU load on my system.

Because we prevent it (plus with actual triple buffering there's no wait, no busy wait, no CPU load). Disabling vsync on triple buffering misdetection is a false positive result of the CPU load preservation.

I'm pretty sure I know what causes this (misdetection - we try to paint too often) and hopefully have a patch this evening - but 4.11.1 is tagged tonight as well :-(

PS: please don't quote unneeded stuff, you're posting to a bugtracker and everything is there in a list of comments anyway ;-)
Not what I thought, but ultimately the same ;-)

https://git.reviewboard.kde.org/r/112368/

The patch works when starting kwin w/ ShowFPS enabled, which had been a reliable way to break detection.
*** Bug 324437 has been marked as a duplicate of this bug. ***
*** Bug 324742 has been marked as a duplicate of this bug. ***
*** Bug 324770 has been marked as a duplicate of this bug. ***
(In reply to comment #41)
> Not what I thought, but ultimately the same ;-)
> https://git.reviewboard.kde.org/r/112368/
>
> The patch works when starting kwin w/ ShowFPS enabled, what had been a
> reliable way to break detection.

I applied this to kde-base/kwin on Gentoo and it fixes the issue for me. VSync is now enabled on every login.
Git commit 0c7fe70a1a89c844f8fbdcc7b3799852ad14d5cd by Thomas Lübking.
Committed on 29/08/2013 at 23:30.
Pushed by luebking into branch 'KDE/4.11'.

fix scheduling the repaints

repaints caused by effects so far polluted the timing calculations since they started the timer on the old vsync offset

This (together with undercut timing) lead to multiple frames in the buffer queue, and ultimately to a blocking swap

For unsynced painting, it simply caused wrong timings - leading to "well, kinda around 60Hz - could be 75 as just well".

REVIEW: 112368

that part is fixed in 4.11.2

M  +27   -6    kwin/composite.cpp
M  +10   -0    kwin/eglonxbackend.cpp
M  +10   -0    kwin/glxbackend.cpp

http://commits.kde.org/kde-workspace/0c7fe70a1a89c844f8fbdcc7b3799852ad14d5cd
Just applied this. VSync is working fine on login.
Still getting this in 4.11.2. Am I missing something?
You still require either triple buffering enabled (iirc it's disabled by default in the nvidia driver; /var/log/Xorg.0.log will tell whether it's activated) - or export __GL_YIELD="USLEEP" by hand.

The patch is only supposed to fix triple buffering misdetection. We've not found a way to convince the driver into usleep yielding after starting the process.
Unfortunately, after upgrading to 4.11.2, I am still experiencing both the tearing (starting a few seconds after the desktop loads) and the high CPU load (kwin rising to 60-80% when watching a video and 30-40% when just moving some windows around). Was the patch supposed to prevent the tearing but not the high CPU load? I never had this problem before I upgraded to 4.11.0, so I guess that I have triple buffering working ok?
(In reply to comment #50)
> Was the patch supposed to prevent the tearing but not
> the high cpu load?

The patch was supposed to fix the triple buffering detection.

> I never had the same problem before I've upgraded to
> 4.11.0, so I guess that I have triple buffering working ok?

No. With either triple buffering or __GL_YIELD="USLEEP" there should be no CPU load. The TB detection doesn't matter - if you had triple buffering, the swap wouldn't have to wait, would not perform a busy wait and would not load the CPU by that.

Run:

grep -i triple /var/log/Xorg.0.log

If that doesn't print sth. like

[    14.137] (**) NVIDIA(0): Option "TripleBuffer" "True"

triple buffering is rather not enabled.

Please check for tearing/CPU load either after triple buffering is enabled for sure, or after you ran

export __GL_YIELD="USLEEP"
kwin --replace &

to prevent busy waits in the driver.

However, with no swap interval set (thus tearing) there should be no CPU load from the swapping either (because it doesn't have to wait for the retrace).

Maybe also check that you didn't set MaxFPS (to sth. scary like 999) or the RefreshRate (~/.kde/share/config/kwinrc)
Hi Thomas, thank you for your reply, and I apologize for the somewhat late answer. I didn't get any output after I ran:

grep -i triple /var/log/Xorg.0.log

so I decided to manually add export __GL_YIELD="USLEEP" to /etc/profile. On the following reboot I didn't notice any tearing, and kwin is constantly running low (3-4%), even when watching non-fullscreen videos and moving the windows around. I am very thankful!

Could you please just confirm that this is the correct solution in my case (to manually add the variable to /etc/profile)?

Kind Regards,
Veroslav
(In reply to comment #52)
> so I decided to manually add export __GL_YIELD="USLEEP" to /etc/profile.
> [...]
> Could you please just confirm that this is the correct solution in my case
> (to manually add the variable to /etc/profile)?

The overall better solution is to enable triple buffering. This is going to benefit you not only with KWin, but with OpenGL applications in general.
(In reply to comment #52)
> Could you please just confirm that this is the correct solution in my case
> (to manually add the variable to /etc/profile)?

The "proper" way here would be to export it from an executable shell scriptlet in ~/.kde/env (executed by startkde), but exporting it in /etc/profile will do the job as well, of course. Nikos is right in suggesting to activate triple buffering, though.

You may also use the kwin loader script attached to this bug (which allows you to set various environment vars affecting kwin) - by placing it into an upper position of the PATH (e.g. ~/bin or /usr/local/bin) it will shadow /usr/bin/kwin on execution and set the relevant environment for KWin only (not affecting other processes - though w/o triple buffering, many or all vsyncing games should run into the same issue)
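A minimal sketch of such a scriptlet - the filename is an arbitrary example, startkde typically only picks up files ending in .sh in that directory, and on some distros the directory is ~/.kde4/env instead:

#!/bin/sh
# save e.g. as ~/.kde/env/nvidia.sh and make it executable
export __GL_YIELD="USLEEP"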
Thank you for the quick replies.

I was going to enable triple buffering, but I've never done this before and would need some help. If I understood it correctly, I would need to add the following line to xorg.conf:

Option "TripleBuffer" "True"

and remove the __GL_YIELD="USLEEP" line altogether?

I don't appear to have an xorg.conf file, so I guess I need to create one manually. Should I just create an empty file /etc/X11/xorg.conf and add Option "TripleBuffer" "True" to it, or is it safer to simply run nvidia-xconfig, let it create it, and then add the line above?

I am sorry for the noob questions, I just want to make sure that I don't destroy my system :) Thank you in advance.

Regards,
Veroslav
(In reply to comment #55)
> Thank you for quick replies.
>
> I was going to enable triple buffering, but I've never done this before and
> would need some help.

xorg.conf is deprecated; instead add a snippet /etc/X11/xorg.conf.d/20-nvidia.conf (your distro should rather add that file anyway) containing:

Section "Device"
    Identifier "Default nvidia Device"
    Driver "nvidia"
    Option "NoLogo" "True"
    Option "CoolBits" "1"
    Option "TripleBuffer" "True"
EndSection

CoolBits allows overclocking (on some devices) and NoLogo removes the startup advertisement ;-)

If you have such a file and it has some other entries, do not remove those; just add the TripleBuffer option to the "Device" section.
To answer your question, Veroslav: if I'm not mistaken, enabling both triple buffering in the NVIDIA driver and __GL_YIELD="USLEEP" in /etc/profile is ideal. That way games can use Triple Buffering and kwin can use its lower-cpu sync method.
Thank you both Thomas and Christian,

I've added the xorg.conf snippet posted in comment #56 to /usr/share/xorg.conf.d/20-nvidia.conf (had to create the file as it wasn't there; also, the path to xorg.conf.d on K/Ubuntu seems to differ from the one in Thomas' example). On the restart, everything was working very well (no tearing and minimal kwin CPU). Also:

grep -i triple /var/log/Xorg.0.log

gave an output similar to:

[    14.137] (**) NVIDIA(0): Option "TripleBuffer" "True"

Now I just need to enable __GL_YIELD="USLEEP" as Christian suggested in comment #57.

Very satisfied, thanks again!

Regards,
Veroslav
(In reply to comment #58)
> I've added the xorg.conf snippet posted in comment #56 to
> /usr/share/xorg.conf.d/20-nvidia.conf

You should put that file in /etc/X11/xorg.conf.d/ (create the path if it doesn't exist.)
(In reply to comment #59) That was my original thought as well, but then I read in several places that /etc/X11/xorg.conf.d is not being used anymore, so I am a bit confused as to which one it really is (many conflicting answers on this topic). Will test /etc/X11/xorg.conf.d, although the driver seems to be able to find my .conf file, as the output of grep -i triple /var/log/Xorg.0.log seems to suggest. Thank you for the input, Nikos.
(In reply to comment #60) > That was my original thought as well, but then I read in several places that > /etc/X11/xorg.conf.d is not being used anymore It is used. It's the standard place where the X.Org server is looking for configuration files and it deprecates the old, monolithic /etc/X11/xorg.conf file.
> [xorg.conf.d] is used. It's the standard place where the X.Org server is looking for
> configuration files and it deprecates the old, monolithic /etc/X11/xorg.conf
> file.

To be clear, "xorg.conf.d" is a folder that may contain a set of files that configure various devices, cards etc. It's no longer one file (i.e. "xorg.conf" the file is old [deprecated] and not used).

That all said, I found that my install of Kubuntu 13.04 *is* using xorg.conf, so that's the file I updated with the details from #56 (I am using the nvidia proprietary drivers). I also added this line to the start of /etc/profile, from #51:

export __GL_YIELD="USLEEP"

KDE now starts for me with no tearing. No need to go into "System Settings/Desktop Effects/Advanced", change the compositing type, "Apply" and then "Revert" to get a tear-free display.
*** Bug 326264 has been marked as a duplicate of this bug. ***
Just hit by this. I was wondering why anti-tearing (which works fine on my system) seems to get disabled after a while for no apparent reason. After reading the debug output from "kwin"

>> It seems you are using the nvidia driver without triple buffering...

things became clearer.

Unfortunately, neither setting __GL_YIELD=USLEEP (causes major fps drops and stuttering during gaming) nor activating triple buffering (adding another 16 ms to the overall latency) is acceptable for me, while the "cpu spikes" are perfectly fine, because they have no noticeable negative impact on my box (Desktop, i7@4.6 GHz, GTX 570). I understand that for a laptop this may be very different, but forcefully disabling vsync to "protect" users is IMHO the wrong solution; a log message is sufficient and may be the right thing.

I have adopted the solution of starting kwin with a script which sets __GL_YIELD=USLEEP, which seems to work fine, and made sure __GL_YIELD isn't set anywhere else.

I use __GL_THREADED_OPTIMIZATIONS=1 (which is a nice 50% fps improvement during gaming essentially for free), but I have no idea how it affects kwin; maybe that's the reason the "cpu spikes" I saw in the first place didn't have any negative impact on performance.
(In reply to comment #64)

It's also possible to utilize nvidia-settings application profiles for this.

> I use __GL_THREADED_OPTIMIZATIONS=1, but I have no idea how it affects kwin

"depends"

LD_PRELOAD="libpthread.so.0 libGL.so.1" __GL_THREADED_OPTIMIZATIONS=1 kwin --replace &

-> see https://git.reviewboard.kde.org/r/111354/

As for the default handling of this: "not working vsync" would be a "remaining" issue while "kwin suddenly eats 30% cpu" would be a severe regression - and for default behavior, we've got to care more about the users who know "where the power button is" than those who know how to use google and tune their system anyway.
(In reply to comment #65)
> It's also possible to utilize nvidia-settings application profiles for this.

I created a profile with key "GLYield" and a value of "USLEEP", which has (should have) the same effect as setting __GL_YIELD, but unfortunately (and probably expectedly) kwin still disables anti-tearing because the environment variable is not set, even if the functionality is there. Is it possible to set KDE_TRIPLE_BUFFERING even while triple buffering is disabled to avoid this?

> > I use __GL_THREADED_OPTIMIZATIONS=1, but I have no idea how it affects kwin
> "depends"
> -> see https://git.reviewboard.kde.org/r/111354/

I have disabled it now, as it is not clear how many gl* calls which return information from a context are in kwin. BTW, in wine this can be quite easily avoided at runtime with WINEDEBUG=-all.

> As for the default handling of this:

I understand :-) Anyway, there are multiple workarounds for me.
(In reply to comment #66)
> Is it possible to set KDE_TRIPLE_BUFFERING even while triple buffering
> is disabled to avoid this?

NO!
The env only exists because triple buffering detection is heuristic. If you "lie" about this, kwin will enter a non-blocking path while glSwapBuffers will actually block. You'll run off-sync and spend more and more time in (CPU expensive) waiting for the buffer swap.

> kwin still disables antitearing because the environment
> variable is not set, even if the functionality is there.

Ok, I frankly never tested but had just expected the driver would set that environment from the profile.
> > kwin still disables antitearing because the environment
> > variable is not set, even if the functionality is there.
> Ok, I frankly never tested but had just expected the driver would set that
> environment from the profile.

I believe it is not possible (at least not easily) for a dynamic library to change the environment of a calling process. The other way around, setting __GL_YIELD from kwin for libnvidia-glcore.so or libGL.so before the libraries are actually loaded could be possible with some evil (and non-portable) dlopen() trickery. I am not sure how I would approach this; maybe by making kwin only a stub which sets the environment and loads the real kwin worker.
The issue with vsync is back. It doesn't get consistently applied on login. KDE 4.11.4. Triple buffering is enabled:

$ grep -i triple /var/log/Xorg.0.log
[    21.596] (**) NVIDIA(0): Option "TripleBuffer" "True"
With what kind of change (KDE update from - to, etc.) For what measured swap time? (Watch debug output for 1212, activate it in "kdebugdialog") For a hotfix, "export KDE_TRIPLE_BUFFERING=1"
(In reply to comment #70)
> With what kind of change (KDE update from - to, etc.)
> For what measured swap time? (Watch debug output for 1212, activate it in
> "kdebugdialog")

Not sure exactly. It's been a while since I used the Linux installation. I think I upgraded from 4.11.2 to 4.11.4. At the same time, I also upgraded the GPU from a GTX 560 Ti to a GTX 780. There were lots of other updates as well (kernel, X.Org, tons of libraries, NVidia driver.)

kwin says:
KWin::SwapProfiler::end: Triple buffering detection: "Available" - Mean block time: 0.11997 ms

The above is with vsync working. Should I try to reproduce the problem and then get the debug message again? It happens somewhat rarely (1 out of 10 logins or so.)
(In reply to comment #71)
> The above is with vsync working. Should I try to reproduce the problem and
> then get the debug message again?

Yes please. Use "kdebugdialog --fullmode" to redirect the kwin (1212) output to a file for this purpose.

Smells like something hangs (for other reasons) and you miss the clip value (1ms) by a few ns or so.
*** Bug 328781 has been marked as a duplicate of this bug. ***
It's been four days now and I wasn't able to reproduce this again even once :-/ Although I did now upgrade to KDE 4.12.0, kwin is still at 4.11.4 (and AFAIK, there will be no 4.12 release of kwin). Unless the update did change something that would result in the bug not triggering anymore.

Anyway, on a related matter: since I upgraded my GPU to a GTX 780, performance has increased immensely. Triple buffering was very useful before, as falling below 60FPS would introduce quite some input lag and TB would mitigate this at the cost of an additional frame of input lag in 60FPS situations. That was the lesser evil, so TB was a good thing. Now, 60FPS is a given in pretty much everything I care to throw at the graphics card. The only thing TB now does is add input lag, which isn't a compromise anymore, but a clear-cut drawback. Playing "Metro Last Light" or "Left 4 Dead 2" for example with TB enabled makes the mouse input feel "floaty." This isn't as important in 3D applications, but in games, it's quite important for the controls to feel "snappy."

So the question is: is/will kwin be able to cope with this? Linux as a gaming platform is becoming more important as of late.
(In reply to comment #74)
> It's been four days now and I wasn't able to reproduce this again even once
> :-/ Although I did now upgrade to KDE 4.12.0, kwin is still at 4.11.4 (and
> AFAIK, there will be no 4.12 release of kwin). Unless the update did change
> something that would result in the bug not triggering anymore.

I have seen this performance drop once - after a few hours of gaming in WoW. ctrl+alt+f1 and back to X fixed it eventually - it may possibly be an nvidia driver bug.

> Anyway, on a related matter: since I upgraded my GPU to a GTX 780,
> performance has increased immensely.

I would expect that ^^

> So the question is: is/will kwin be able to cope with this? Linux as a
> gaming platform is becoming more important as of late.

I do not believe that generic linux distros will ever be of any importance for the gaming software industry - but anyway, here is my "bugfix": disable triple buffering generally and create an executable shell script

#!/bin/bash
__GL_YIELD=USLEEP /usr/bin/kwin

in /usr/local/bin/kwin, and make sure /usr/local/bin is before /usr/bin in your $PATH.

So I get it all: low CPU usage, 60 fps in desktop effects, and lag-free gaming.
*** Bug 329297 has been marked as a duplicate of this bug. ***
I just updated to KDE 4.12.2, and vsync no longer works when tearing prevention is set to "Automatic" in the desktop effects settings. I have the following in my /etc/profile:

export __GL_YIELD="USLEEP"

Vsync was working fine before the update from 4.12.0 to 4.12.2. Turning on kwin logging doesn't seem to provide anything helpful.

Vsync does work, however, if I select "Full scene repaints" for kwin tearing prevention. I'm not sure if this is a good thing or not. Are there any downsides to using "Full scene repaints" vs. "Automatic"?

My gfx card: NVIDIA GeForce 7900 GS w/ 256 MB RAM (PCI Express)
NVIDIA video driver: 304.116
Likely https://git.reviewboard.kde.org/r/115523/

The legacy 304.xxx drivers don't provide "glxinfo | grep buffer_age", do they?
glxinfo | grep buffer_age ...returns nothing on my system.
It's the bug covered by that patch then (and has nothing to do with yielding, triple buffering etc. - it's just broken). The patch will be in 4.11.7 (released on March 5th).

Full scene repaints has a slight overhead (in case anyone reads this: "on the nvidia blob"; on MESA frontbuffer copying is unusable) - and more, if you make "excessive" use of blurring.
Git commit b00cc9cda191795ceae874526c7bd57b2a832982 by Thomas Lübking.
Committed on 05/02/2014 at 23:16.
Pushed by luebking into branch 'KDE/4.11'.

fix frontbuffer copying swap preference

REVIEW: 115523
Related: bug 330794

M  +1    -1    kwin/scene_opengl.cpp

http://commits.kde.org/kde-workspace/b00cc9cda191795ceae874526c7bd57b2a832982
*** Bug 331720 has been marked as a duplicate of this bug. ***
This is from the changelog of the latest NVidia driver release (334.21). Is this relevant or totally unrelated? "Fixed a bug in the GLX_EXT_buffer_age extension where incorrect ages would be returned unless triple buffering was enabled."
That should be related to bug #330794. It has no impact on the busy wait (which I didn't test for two driver point releases, though).
I'm still having this issue in KDE 4.12.3. Is the patch in comment 81 included in this version?
There's no 4.12 version of kwin - you'll have to compare "kwin --version" (and require 4.11.7)
Aah... I'm only on version 4.11.6. Thanks for the info.
I'm on kwin 4.11.8 now and I can confirm that vsync is working again on my system.
Changelog of the latest Nvidia driver: "Fixed a performance regression when running KDE with desktop effects using the OpenGL compositing backend." https://devtalk.nvidia.com/default/topic/748536
bug #244253 - I don't think it will be the busy wait (but just updated the driver - wait for reboot ;-)
I had a big performance regression after the last driver update and hadn't gotten around to debugging it. I won't be on the computer with the nvidia card for another day or so, so I can't test the new one yet, but it's probably that, and not the sleeping business. usleep is still working great for me, though, as far as this issue is concerned.
I've been suffering from tearing in KDE since many versions ago. I had an nVidia GeForce 9600GT and now I have an nVidia GeForce 210. I'm running Kubuntu 14.04 (KDE 4.13.2, KWin 4.11.10). Every time my computer starts or restarts, the tearing re-appears and I have to change the OpenGL implementation in order to recover the vertical synchronization, no matter which version I set (1.2, 2.0 or 3.1). I'm using the official nVidia drivers, version 331.38 (without updates). Is it related to this bug? If it isn't, apologies.
(In reply to negora from comment #92)
> Everytime that my computer starts or
> re-starts, the tearing re-appears and I've to change the OpenGL
> implementation in order to recover the vertical synchronization, no matter
> which version I set (1.2, 2.0 or 3.1). I'm using the official nVidia
> drivers, version 331.38 (without updates). Is it related to this bug?

It is related. __GL_YIELD="USLEEP" "fixes" it.

Create a file.sh with similar content:

#!/bin/sh
export __GL_YIELD="USLEEP"
kwin --replace &

Then add this file to KDE startup. My KWin is set to OGL 3.1 + Re-use screen content + Raster.
Thank you Piotr Kloc. It was very helpful! I hope that the KDE team is able to solve it definitively in a future release. Thank you for your hard work.
Hi

(In reply to Piotr Kloc from comment #93)
> It is related. __GL_YIELD="USLEEP" "fixes" it.
>
> Create file.sh with similar content:
>
> # /bin/sh
> export __GL_YIELD="USLEEP"
> kwin --replace &
>
> Then add this file to KDE startup. My Kwin is set to OGL 3.1 + Reuse content
> + Raster

Thanks so much! I also had this exact same problem: terrible tearing after every reboot. The only way to resolve it was to manually set the renderer to OGL 2.0 and then back to OGL 3.1. However, your script works perfectly and now I don't have any more tearing :)
There's no need to replace kwin if you set it early enough. If you put the script in ~/.kde/env instead of Autostart (or from the gui, "Pre-KDE startup" instead of "Startup") it'll run before kwin starts, and all you need is the export.
I'm hit by this bug as well.

System: Debian testing
KDE 4.14.1
KWin: 4.11.12-2+b1
Nvidia GeForce GT 620, driver 343.22

Is there a way to set the workaround for KWin only? Putting __GL_YIELD='USLEEP' in a script in $HOME/.kde/env sets it for all applications which run later, which may not be a good idea in some other cases.
Also, for me this issue surfaces when I switch to a tty console from the main session or lock the screen (with the simple KDE locker). All other times CPU usage is normal.
You hit it all the time, it's just that on those occasions the sync "lasts" long enough (forever) to really show.

Check the kwin script from here: https://github.com/luebking/KLItools

Put it somewhere up in $PATH so that it shadows the kwin binary (/usr/local/bin, or possibly ~/bin) - ensure it's executable.
*** Bug 341166 has been marked as a duplicate of this bug. ***
(In reply to Thomas Lübking from comment #99)
> Check the kwin script from here:
> https://github.com/luebking/KLItools
>
> Put it somewhere up in $PATH so that it shadows the kwin binary
> (/usr/local/bin, evtl. ~/bin) - ensure it's executable.

I'm using Fedora 21 64 bit. My KDE version is 4.14.3. When I use that script my menu bar disappeared. I've come up with this simple script:

$ cat kwin
#!/bin/sh
# Put this script in the PATH before the actual kwin. First set the actual path of kwin in the kwinPath variable
kwinPath=/bin
export __GL_YIELD="USLEEP"
${kwinPath}/kwin "$@"

I've named it kwin to shadow the actual kwin. In my case I put it in /usr/lib64/ccache because that path is before /bin in my PATH variable. So I don't need to replace kwin or set USLEEP for all programs.
(In reply to Saman from comment #101)
> I'm using Fedora 21 64 bit. My KDE version is 4.14.3. When I use that script
> my Menu bar disaapeared. I've come up with this simple script:

What do you mean by "menubar disappeared"? plasma-desktop crashed? That'd be coincidental at best. The only other thing that script effectively does is to run nvidia-settings to check for and get rid of FXAA in case.
(In reply to Thomas Lübking from comment #102)
> What do you mean by "menubar disappeared"? plasma-desktop crashed?
> That'd be coincidental at best.
> The only other thing that script effectively does is to run nvidia-settings
> to check for and get rid of FXAA in case.

Sorry for replying so late; I've been too busy. Yes, I mean plasma-desktop. I've tested again and it works like a charm! I don't know if that bug was random or related to Fedora 21, because today I updated my distro and tested it again. By the way, I didn't know about kwin_gles. It's amazing. Thank you for your great work.
Is "__GL_YIELD="USLEEP" still needed on plasma 5 with nvidia ?
(In reply to jeremy9856 from comment #104) > Is "__GL_YIELD="USLEEP" still needed on plasma 5 with nvidia ? Unless you're using triple buffering: yes. Nothing has changed about the situation.
Autostart scripts work differently in KDE 5 as detailed here: https://docs.google.com/spreadsheets/d/1kLIYKYRsan_nvqGSZF-xJNxMkivH7uNdd6F-xY0hAUM. Check out the 15.04 tab, starting at row 101.
I'm using the latest nvidia driver 352.21 and I've enabled TripleBuffering in xorg.conf:

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 650 Ti"
    Option         "NoLogo"
    Option         "ModeValidation" "AllowNonEdidModes"
    Option         "AddARGBGLXVisuals" "true"
    Option         "TripleBuffer" "true"
EndSection

But I still get this warning:

kwin_core:
It seems you are using the nvidia driver without triple buffering
You must export __GL_YIELD="USLEEP" to prevent large CPU overhead on synced swaps
Preferably, enable the TripleBuffer Option in the xorg.conf Device
For this reason, the tearing prevention has been disabled.
See https://bugs.kde.org/show_bug.cgi?id=322060

I have the latest kwin from git master (f6458fa1e8e92fdf16a1acc961703d229894454c), but it was also the same for previous stable versions.

Any idea?
(In reply to Dāvis from comment #107)

From the journal:

Extensions: shape: 0x "11" composite: 0x "4" render: 0x "b" fixes: 0x "50" randr: 0x "14" sync: 0x "31" damage: 0x "11"
kwin_core: screens: 2 desktops: 4
kwin_core: Initializing OpenGL compositing
kwin_core: Choosing GLXFBConfig 0x115 X visual 0x2b depth 24 RGBA 8:8:8:0 ZS 0:0
kwin_core: Initializing fences for synchronization with the X command stream
kwin_core: 0x20071: Buffer detailed info: Buffer object 1 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_DYNAMIC_DRAW) will use SYSTEM HEAP memory as the source for buffer object operations.
kwin_core: 0x20071: Buffer detailed info: Buffer object 1 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_DYNAMIC_DRAW) has been mapped WRITE_ONLY in SYSTEM HEAP memory (fast).
kwin_core: OpenGL 2 compositing successfully initialized
kwin_core: Vertical Refresh rate 75 Hz ( "primary screen" )
kwin_core: 0x20071: Buffer detailed info: Buffer object 2 (bound to GL_ELEMENT_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
kwin_core: Successfully loaded built-in effect: "blur"
kwin_core: Activation: No client active, allowing
kwin_core: Successfully loaded built-in effect: "contrast"
kwin_core: Session path: "/org/freedesktop/login1/session/c2"
kwin_core: 0x20071: Buffer detailed info: Buffer object 3 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
kwin_core: 0x20071: Buffer detailed info: Buffer object 7 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
kwin_core: Triple buffering detection: "NOT available" - Mean block time: 7.79337 ms
See bug #343184 - triple buffer detection is unfortunately heuristic and something™ during plasmashell startup (only, restarting kwin during the session does not seem to expose that) causes blocking (or at least "slow") swaps :-(
There's a nice workaround that can substitute for vsync. It works quite nicely for me.

https://wiki.archlinux.org/index.php/NVIDIA#Avoid_tearing_with_GeForce_500.2F600.2F700.2F900_series_cards

It also allows for lower latency when windows are being moved.
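For reference, the linked wiki workaround boils down to forcing the driver's full composition pipeline via a metamode. A hedged example follows: the metamode string "nvidia-auto-select +0+0" is an assumption and depends on your monitor layout (check the wiki page or nvidia-settings for the right one), and it only lasts for the running X session unless also put into xorg.conf.

nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"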
*** Bug 382831 has been marked as a duplicate of this bug. ***
My bug (382831) was marked as a duplicate of this bug, so I guess I'll throw my two cents in here. I read through this bug -- a lot of it is pretty technical and over my head. My issue with KWin is screen tearing while using Nvidia proprietary drivers:

BEHAVIOR
When I enable the screen tearing prevention in the Compositor settings, click apply, then drag & move a window around, screen tearing seems fixed for a few seconds, but the tearing returns after that. Changing the settings again produces the same result: tearing is fixed for a few seconds, but dragging and moving a window across the screen will show that screen tearing returns.

EXPECTED BEHAVIOR
When tearing prevention is applied (regardless of the setting or the driver used), the settings stick after you click APPLY or OK, and the tearing is gone.

ISSUE DISCOVERED WHILE USING
Kubuntu 17.04 x64
KDE Plasma 5.10.4 (via ppa:kubuntu-ppa/backports)
KDE Frameworks 5.36.0
Qt 5.7.1
Linux kernel 4.10.0-28-generic
Nvidia GTX 1080 graphics card
Nvidia 384.59 proprietary driver (via ppa:graphics-drivers/ppa)
This also affects me on Kubuntu 16.04 with the backports PPA and the Nvidia CUDA driver package compatible with CUDA 8.0.

Adding export __GL_YIELD="USLEEP" to /etc/profile.d/kwin.sh fixed the problem, although I am sure this is probably just a workaround.
Hi everyone! Sorry for reducing the signal-to-noise ratio here :-) I would like to ask the developers this question: I know you are, for reasons I understand and respect, against all kinds of "workarounds". However... this Nvidia + kwin tearing story has been around for ages. I mean, 5+ years is absolutely huge when it comes to software development. Power users can look for the solution, but most average users won't and will be disappointed. I witnessed this around me.

If we agree it's not a good thing to build in a workaround for this specific use case, what could be done? Some kind of "ironic option" in KWin settings explicitly referring to the bug it's working around (as I guess it's Nvidia's fault to some extent)? Or shouldn't at least distro providers do something about it even if it's not built into KWin?

Sorry for my useless blabbering :-) and congrats again for all you've been doing. I love KWin & the KDE ecosystem :-)
BTW, after re-reading myself I had the feeling my post sounded a little strange, so I apologize if it sounded a little harsh, which was absolutely not what I meant. My point was just: in my experience (4 different PCs, about 6 different Nvidia GPUs), this "export __GL_YIELD=USLEEP" workaround is always necessary to prevent tearing. Also, all my Nvidia + KDE using friends had to apply it as well. I know it is bad to apply a workaround systematically... What other solution do we have? Is it up to the distros?

Best regards
"export __GL_YIELD=USLEEP" is not needed to prevent tearing if you're using triple buffering (i.e. have it enabled in xorg.conf and define KWIN_TRIPLE_BUFFER=1). However, that option can reduce CPU consumption somewhat, so it's probably beneficial anyway. The problem is that __GL_* variables are driver-specific and should preferably be defined somewhere global (e.g. /etc/environment) so that all applications are affected, not just kwin.
Maintainer speaking: We will not add any workarounds!

This has various reasons:
* We lack developers with the expertise to understand the problem
* We lack developers with NVIDIA cards
* The last patch we did for an NVIDIA-specific issue caused severe issues which required an emergency release
* We have no chance to properly understand what's going on due to the NVIDIA driver being proprietary
* If the NVIDIA driver as the only driver needs such workarounds, NVIDIA should fix their drivers or contribute patches

Last but not least: X11 is going to be feature frozen after 5.12. We are almost in feature freeze for 5.12 and given that we now have the Christmas break, it's unlikely that any feature for NVIDIA is going to land before. I don't see where the devs should come from. As NVIDIA does not support Wayland, the feature freeze for X11 means that we won't add any changes specific to NVIDIA any more.

I'm sorry for any inconvenience that creates for you. If you have any complaints about it, please take it to NVIDIA to fix their driver or release it as open source or to do anything which would allow us to not have to work around things.
Hi Martin. Thanks a lot for the explanation. That's completely understandable. Thanks again so much for your work !
Given that, thanks to QtQuick, OpenGL is now everywhere, shipping a global environment snippet might be a good idea. Otherwise see the initial posts on the problems with setting up libGL from kwin this way. I'd have to check over xmas whether the nvidia blob still exposes this behavior.
Hi again, I would like to raise a different but likely connected issue:

On my setup, either setting USLEEP or triple buffering does fix the tearing issue. With either, performance is wonderful in 3D games (1060 GTX + i3 CPU).

However, what boggles me is that the responsiveness and smoothness of kwin itself (I mean the desktop: resizing windows, raising menus etc.) is inconsistent. The same animation can be butter smooth and then the next time appear jerky. It has actually always happened, as long as I can remember, when using kwin and Nvidia cards.

It seems slightly better with triple buffering, but as I don't want to add input lag in games...

I kinda supposed it was standard behaviour, but I noticed that even old machines with Intel HD Graphics would show a perfectly constant 60 FPS in kwin. I tried to remove my Nvidia card and use the built-in Intel HD, and the desktop felt perfect. It even seemed to feel smoother with the Nouveau driver.

I'm pretty sure all this is also related to the difficulty of working with the proprietary driver. As you devs are aware of the vsync issue, what's your opinion on the "desktop smoothness" issue?

Thanks again :-)
Best regards & happy new year :)
> As you devs are aware of the vsync issue, what's your
> opinion on the "desktop smoothness" issue?

Honestly, I stopped caring about nvidia-specific problems years ago. To me Nvidia stopped mattering the day I switched my developer systems to Wayland. They run Intel, as NVIDIA doesn't support gbm. Due to that, even if I wanted to, I would not be able to install an Nvidia card and test issues.

Nowadays my opinion is that it is the choice of the consumers to decide whether they want nvidia with all those problems or not. They as consumers can bring nvidia to fix issues, but not we devs. Every crash in the Nvidia driver gets closed with "please report to Nvidia". If all users do, it might change something.
Comment #120 sounds more like a client-related issue anyway - resizing (QtQuick) GL contexts is a PITA, at least on nvidia, but OpenGL wasn't drafted for this behavior anyway.
It would be cool if we could dynamically load the nvidia binary driver (like nvidia prime, but switching quickly between the open and closed driver) whenever we want, so we could have the best of both worlds and not even worry about any of those things. Screen tearing is probably the one thing I hate about KDE/Plasma right now compared with other solutions that, for whatever reason, do not have this issue.
Impossible. nvidia and nouveau are incompatible on the kernel layer and act on the same HW. It's nowhere near the Optimus situation.
@Martin Flöser: thanks again for your replies, your work, as well as your blog posts and technical decisions. This is most appreciated. I will clearly stop bothering you about Nvidia. As a consumer, can anyone here tell me what the state of the discussion with Nvidia is, and what they are aware of? Did some users here get in touch with them through their forums or through their bug tracker, if any? Any link so I could add my data to existing reports? I cannot believe they are not at least a little concerned. It's crazy that, for instance, opening any app while a video is playing underneath makes the video stutter, which doesn't happen with Intel HD Graphics. Or that desktop effects randomly stutter, etc.
(In reply to Mahendra Tallur from comment #125)
> I cannot believe [nvidia] are not at least a little concerned.

I can well believe that nvidia are unconcerned; desktop GNU/Linux is such a small share of their market that there probably isn't the profit in making things work, and culturally nvidia appear to be hostile to F/OSS.

The correct "fix" is probably to switch to AMD/Intel, which will basically be mandated anyway as more distros switch to Wayland. Which kinda sucks for those of us on nvidia, but life happens.

At the moment I find that keeping desktop effects enabled & using '__GL_YIELD="USLEEP"' makes the problems mostly go away.
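For what it's worth, a simple way to make sure kwin inherits the variable is to export it before the session starts, e.g. from a system-wide profile script (the path below is just an example; the right place depends on your distro and login manager):

  # e.g. /etc/profile.d/kwin-nvidia.sh (hypothetical path)
  export __GL_YIELD="USLEEP"

As noted later in the thread, setting it from inside an already-running kwin process is too late for libGL to pick it up.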
Nvidia may not care specifically about KDE/Plasma to the extent you want, but saying it doesn't care at all is clearly wrong. They do care about Linux. In fact, Nvidia knows its cards need to work well on Linux for ML and various other tasks. I have over 100 Steam games that work well on Linux, and Nvidia has made the needed fixes at times.

When I launch a game, Plasma freezes for a moment and then all advanced graphics appear frozen. My wallpaper even reverts back to some previous datetime's image. I'm not saying this is for sure the KDE community's fault. I am just saying we should clean up our own house before casting shade on another's. I understand people getting pissed off by things not working, but the way you get things done is by providing paths for people to get involved and help. If they disagree, you don't just put your head in the sand or give them the finger. What you should do is talk about it, and if Nvidia truly does something messed up, let the community know - but be careful, because the Linux zealots will be overly obtuse. I am all for GNU/Linux, but I think we should all be pragmatic while sticking to our guns.
> Plasma freezes for a moment and then all advanced graphics appear frozen. Sounds just as if steam blocks/disables the compositor? > My wallpaper even reverts back to some previous datetime's image. That I cannot explain at all. > when opening any app while a video is playing underneath makes the video stutter This is either by the opening animation (crucial?) and the video running at different FPS or (rather?) I/O related (in this case different "apps" will have different impact) > which doesn't open with an Intel HD Graphics Try to enforce the full composition pipeline (nvidia-settings, will draw more energy)
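In case it helps, the full composition pipeline can be forced at runtime with nvidia-settings; the metamode string below is only an example and needs to be adjusted to your output and resolution:

  nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"

To make it persistent, the same string can go into the "metamodes" option in xorg.conf. As noted, this routes rendering through the driver's own composition step and will draw more energy.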
I don't think that is the case, because it happens when using CUDA outside of games in some projects I work on.
(In reply to Thomas Lübking from comment #128) > > My wallpaper even reverts back to some previous datetime's image. > That I cannot explain at all. This is reproducible by simply suspending compositing. After an undetermined amount of time, the non-composited desktop will freeze its appearance and will not get updated anymore. New applications don't appear on the task manager bar, the system tray doesn't get updated, basically it's frozen to some state in the past. Resuming compositing again makes it work correctly. Suspending compositing after that makes it revert to the same old frozen state. For example, if it got frozen and the time on the systray says "20:00", but the time now is 21:00, then suspending and resuming compositing makes it display 20:00 - 21:00 - 20:00 - 21:00, etc, as you suspend/resume compositing.
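For anyone trying to reproduce this: compositing can be toggled with the default Alt+Shift+F12 shortcut. It should also be scriptable over D-Bus - the object path and method names below reflect my understanding of the Plasma 5 interface and may differ between versions:

  qdbus org.kde.KWin /Compositor suspend
  qdbus org.kde.KWin /Compositor resume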
That sounds like exactly what I experience. I guess the Nvidia driver is doing this?
> and the time on the systray says "20:00", but the time now is 21:00 That's not what you said :-P https://bugs.kde.org/show_bug.cgi?id=353983
huh?
(In reply to Thomas Lübking from comment #132) > > and the time on the systray says "20:00", but the time now is 21:00 > That's not what you said :-P > > https://bugs.kde.org/show_bug.cgi?id=353983 I don't see any post from me on that bug :-P
Regarding Force(Full)CompositionPipeline versus setting __GL_YIELD to USLEEP: I used to use the latter and not the former, because forcing the composition pipeline may slow down the system, increase power consumption, etc. - or so I thought. With either solution, tearing is fixed.

When using USLEEP I felt there was a performance impact on the desktop: for instance, it's obvious when moving a window to the edge of the screen to maximize it - it's jerky when using USLEEP with an nvidia card (but butter smooth with my Intel HD Graphics). I found that this animation is also smooth when using Force(Full)CompositionPipeline. I'm not 100% sure about the other effects, but it seems better. General desktop usage still seems slightly less smooth than with the open source drivers, but it's better than with USLEEP.

As for forcing the FULL composition pipeline versus just the composition pipeline, I don't know what actual difference it makes. (Sorry for adding more noise :-)
The full composition pipeline makes the driver render indirectly through its own composition step. It's implicitly enabled when e.g. scaling the output with xrandr.

As for the "not what you said" comment: I focused on the "old wallpaper" thing. I didn't read the description as "all of plasmashell freezes to an old buffer".
I second Ryein & Nikos regarding the previous-buffer issue. Also (but that's offtopic) I managed to kill kwin by switching back and forth between a game that interrupted compositing and the desktop. (That's probably another story and I'll check with the new plasma.) Thanks for taking the time to reply, Thomas.

I see another issue when using Force(Full)CompositionPipeline: about one time out of two, I get a black desktop (no plasmoids / background) until the next reboot. I still get the bottom panel, and everything else works though. Also, even though the value is set in xorg.conf, it adds a 15 second delay on a black screen (after X starts / before plasma appears).
Oops, I forgot this in my previous comment (about getting a black desktop in 50% of the cases when forcing the pipeline): the desktop is black, but when I move the mouse over it and right-click, the menu corresponds to the items that were supposed to be there; for instance, the properties of a specific desktop icon in a specific folder view...
(In reply to Mahendra Tallur from comment #137) > > I see another issue when using Force(Full)CompositionPipeline : about one > time out of two, I get a black desktop (no plasmoid / background) until the > next reboot. This may be a manifestation of the infamous "black textures" problem. https://bugs.kde.org/show_bug.cgi?id=386752 https://devtalk.nvidia.com/default/topic/1026340/linux/black-or-incorrect-textures-in-kde
BTW, in that Nvidia thread, a driver developer says: > The claim that __GL_YIELD=usleep is required points at an application bug, possibly a race condition due to missing synchronization. So if anyone is fixing tearing by that option (i.e. the tearing is present without it and not present with it), then, according to Nvidia, the bug is in Kwin. On my system though, __GL_YIELD=usleep has no influence on tearing, only on CPU consumption.
As pointed out in the original report, this has *never* been about vsync functionality per se. The other yield methods caused ridiculous CPU load when glSwapBuffers blocks, i.e. with double-buffered vsync (at that time, at least - I dropped KDE for other reasons since). So the code just checks whether the environment variable is set, and disables vsync if it isn't and triple buffering is guessed (there is/was no way to query it) to be off. Thus it would be good if one could ensure kwin is loaded with this setting, but setting the environment from within the process is too late.
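A quick way to verify what kwin actually inherited is to inspect its environment; the binary name kwin_x11 is an assumption here (older setups just use kwin):

  tr '\0' '\n' < /proc/$(pidof kwin_x11)/environ | grep __GL_YIELD

If that prints nothing, libGL inside kwin never saw the variable, which matches the "too late" problem described above.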
As suggested, I finally created a thread on the NVIDIA forum where an NVIDIA dev frequently replies... A message in a bottle ! https://devtalk.nvidia.com/default/topic/1029568/linux/the-situation-on-kde-kwin-plasma-performance/
(In reply to Mahendra Tallur from comment #143) > As suggested, I finally created a thread on the NVIDIA forum where an NVIDIA > dev frequently replies... A message in a bottle ! > > https://devtalk.nvidia.com/default/topic/1029568/linux/the-situation-on-kde- > kwin-plasma-performance/ That link doesn't seem to work. I can't see any such thread there.
It does exist; for some reason it seems to be hidden. Maybe NVIDIA has to approve it first. Sorry for the inconvenience. It seems many other users have the same issue on that forum.
@Nikos : the link is available now : https://devtalk.nvidia.com/default/topic/1029568/linux/the-situation-on-kde-kwin-plasma-performance/
What I don't get about this long-standing bug is that under GNOME everything works out of the box - no tearing, and without having to mess with any configuration. This suggests to me that the bug is somewhere in kwin.
(In reply to Peter Eszlari from comment #147)
> What I don't get about this long-standing bug is that under GNOME
> everything works out of the box - no tearing

Feel free to use GNOME if it gives the better experience for you. Unfortunately it is not possible to draw any conclusions from the fact that it works for you on GNOME.
Or maybe just read https://bugs.kde.org/show_bug.cgi?id=322060#c141 about the nature of the "bug" and the state of the resolution ...
[OT: to users] Hi! I'm sorry for adding noise; I would just like to offer a piece of advice to *users* like me, as this is a longstanding problem affecting many people.

1) There are technical considerations that we cannot grasp; numerous efforts were made in the past, and the very nature of the Nvidia drivers makes it difficult to solve this problem. Workarounds were attempted in the past with no satisfying result.

2) You can get a semi-acceptable state by applying workarounds (disabling automatic composition interruption; enabling triple buffering - see the sketch after this list), but it's never that great.

3) It's not that great under GNOME either. You still get sub-optimal performance on the desktop (I tried & compared Gnome Shell performance with an Intel HD); you still have to apply tweaks for tearing in some apps (Totem, browsers for instance). It's acceptable, but that's also a compromise...

4) Believe me, the difference in terms of general usability is so huge that it's worth downgrading and switching to a different GPU vendor if you're not a big gamer. I settled on a very slow and cheap AMD RX 550. Gaming is OK, but desktop performance is stellar (as it is with Intel HD drivers). No more tearing, no more KDE panel crashes, no workarounds. You also benefit from: open source drivers, a Wayland session, Lakka now works, and eventually the realtime kwin ;-), no tweaks, a constant 60 FPS desktop...
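A minimal sketch of the two workarounds from point 2, assuming Plasma 5; the kwinrc key name is my best guess and should be double-checked on your version:

  # keep compositing running even when applications ask to block it (assumed key name)
  kwriteconfig5 --file kwinrc --group Compositing --key WindowsBlockCompositing false

  # only if triple buffering is really enabled in xorg.conf
  export KWIN_TRIPLE_BUFFER=1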
(In reply to Martin Flöser from comment #148)
> Feel free to use GNOME if it gives the better experience for you.
> Unfortunately it is not possible to draw any conclusions from the fact that
> it works for you on GNOME.

But then I would have to deal with a crippled desktop environment. What I will do instead is buy an AMD card. I just don't think such an out-of-the-box experience for owners of Nvidia cards is good for KDE.
(In reply to Peter Eszlari from comment #151)
> (In reply to Martin Flöser from comment #148)
> > Feel free to use GNOME if it gives the better experience for you.
> > Unfortunately it is not possible to draw any conclusions from the fact that
> > it works for you on GNOME.
>
> But then I would have to deal with a crippled desktop environment. What I
> will do instead is buy an AMD card. I just don't think such an
> out-of-the-box experience for owners of Nvidia cards is good for KDE.

Please tell Nvidia that you are buying a card from a different vendor because their driver sucks. Nvidia needs to know that this is harming their business. We cannot and do not want to fix Nvidia's driver issues.
(In reply to Martin Flöser from comment #152) > Nvidia needs to know that it is harming their business. I think they won't care much, because the Linux customers that Nvidia cares about are running Redhat Enterprise with GNOME.
I guess this bug can be closed now: https://phabricator.kde.org/D19867
Marking as fixed per latest comment
For anyone coming here through a web search, there's a KWin fork that fixed all issues for me: https://github.com/tildearrow/kwin-lowlatency No more frame skipping or stutter, no more lag, works great with modern, better-than-60Hz displays.
(In reply to Peter Eszlari from comment #154)
> I guess this bug can be closed now:
>
> https://phabricator.kde.org/D19867

Good day, how can I apply this resolution? What is the resolution?
I believe it now works quite well without tripleBuffer on kwin 5.15.5 on Gentoo GNU/Linux - just dropping a version number here.