Bug 322060 - Synced swapping on double buffered nvidia GPUs causes high CPU load
Summary: Synced swapping on double buffered nvidia GPUs causes high CPU load
Status: RESOLVED FIXED
Alias: None
Product: kwin
Classification: Plasma
Component: scene-opengl
Version: git master
Platform: Other Linux
Importance: NOR normal
Target Milestone: ---
Assignee: KWin default assignee
Duplicates: 323646 323817 323835 324049 324190 324437 324742 324770 326264 328781 331720 341166 382831
Reported: 2013-07-07 07:36 UTC by Thomas Lübking
Modified: 2019-06-03 19:48 UTC
CC List: 48 users

Attachments
KWin loader script (3.52 KB, text/plain) - 2013-08-17 18:37 UTC, Thomas Lübking
kwin debug output (9.55 KB, text/plain) - 2013-08-18 15:03 UTC, S. Christian Collins
X.org log file (20.08 KB, text/x-log) - 2013-08-18 15:03 UTC, S. Christian Collins
my xorg.conf (1.54 KB, application/octet-stream) - 2013-08-18 15:06 UTC, S. Christian Collins

Description Thomas Lübking 2013-07-07 07:36:42 UTC
The reason is that the nvidia driver performs a busy wait. Only setting __GL_YIELD="USLEEP" avoids this.
The default and especially "NOTHING" boost CPU usage for nothing; "NOTHING" will also steal CPU slices from KWin's core functionality.

The next issue I found is that libkwinnvidiahack is ineffective - it is loaded and executed, but (perhaps due to libkdeinit_kwin?) apparently too late: setting the env var there has absolutely no impact, while setting it on the terminal lets me control the CPU load reliably.

The additional load is not minor: a factor of 5 when playing a video here.

Tasks:
1. figure out how to make libkwinnvidiahack operative again
2. set __GL_YIELD to USLEEP, and maybe some others like __GL_FSAA_MODE and __GL_LOG_MAX_ANISO (both to 0)

Reproducible: Always
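
For quick reference, this is the minimal terminal check used later in this report (see comment 51) - a sketch; run it inside the session and compare kwin's CPU load with and without it:

# prevent the driver's busy wait, then restart the compositor
export __GL_YIELD="USLEEP"
kwin --replace &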
Comment 1 Thomas Lübking 2013-08-02 20:30:08 UTC
Git commit 031d290f9a07c533daa80547424e6d1c1b9dac5b by Thomas Lübking.
Committed on 23/07/2013 at 20:34.
Pushed by luebking into branch 'KDE/4.11'.

prevent yield/swap cpu overhead on nvidia
REVIEW: 111663

M  +12   -0    kwin/eglonxbackend.cpp
M  +12   -0    kwin/glxbackend.cpp

http://commits.kde.org/kde-workspace/031d290f9a07c533daa80547424e6d1c1b9dac5b
Comment 2 Thomas Lübking 2013-08-05 19:01:59 UTC
Git commit a6b8844eacc7734cd623fe40ff2114009b583165 by Thomas Lübking.
Committed on 03/08/2013 at 14:17.
Pushed by luebking into branch 'KDE/4.11'.

remove nvidiahack lib

1. it apparently is ineffective
2. if it was effective, its current behavior would not exactly be helpful
   (sets __GL_YIELD to NOTHING, causing busy waits on doublebuffer swapping)
3. it does for sure pollute the doublebuffer/usleep detection (setenv is set to override),
   i.e. the overhead detection code gets a different opinion on __GL_YIELD than libGL

REVIEW: 111858

M  +0    -14   kwin/CMakeLists.txt
D  +0    -52   kwin/nvidiahack.cpp

http://commits.kde.org/kde-workspace/a6b8844eacc7734cd623fe40ff2114009b583165
Comment 3 Thomas Lübking 2013-08-17 18:27:21 UTC
*** Bug 323646 has been marked as a duplicate of this bug. ***
Comment 4 Thomas Lübking 2013-08-17 18:37:44 UTC
Created attachment 81758 [details]
KWin loader script

Attached is a loader shell script that sets the required environment vars, tries to disable FXAA, and launches kwin.

Place it somewhere early in $PATH (e.g. ~/bin often is) to shadow the kwin binary.
Comment 5 S. Christian Collins 2013-08-18 05:53:37 UTC
The Kwin loader script didn't work for me, causing windows to have no decorations or compositing. Adding 'export __GL_YIELD="USLEEP"' to /etc/profile fixed bug 323646 (marked as a duplicate of this bug) for me.
Comment 6 Thomas Lübking 2013-08-18 07:00:11 UTC
Sounds as if the real kwin binary is not found in the exec paths, i.e. kwin was not started?

> IFS=':' EXEC_PATHS=`kde4-config --path exe`
> for BINARY in ${BINARIES}; do
>    for EXEC_PATH in ${EXEC_PATHS}; do
>        if [ "${EXEC_PATH}${BINARY}" = "$THIS_BIN" ]; then
>            continue;
>        fi
>        if [ -e "${EXEC_PATH}${BINARY}" ]; then
>            echo "$THIS_BIN started ${EXEC_PATH}${BINARY}" > /tmp/kwin.log

About triple buffer detection: if you have enabled it for sure and it works for sure, use "export KWIN_TRIPLE_BUFFER=1" instead.

Does the "kwin (1212)" debug (run kdebugdialog) output indicate that
  "Triple buffering detection: NOT available"?
Comment 7 S. Christian Collins 2013-08-18 15:03:02 UTC
Created attachment 81771 [details]
kwin debug output

I have no idea how to verify whether or not triple-buffering is indeed enabled, but I do have it as an option in my xorg.conf file. The X.org log file at least acknowledges that the option is recognized, but I don't see anything beyond that to indicate that triple-buffering is actually active. A Google search for a way to verify this has turned up nothing.

I have run kwin with the debug info enabled and have attached the output as well as my Xorg log file.
Comment 8 S. Christian Collins 2013-08-18 15:03:51 UTC
Created attachment 81772 [details]
X.org log file
Comment 9 S. Christian Collins 2013-08-18 15:06:02 UTC
Created attachment 81773 [details]
my xorg.conf

here's my xorg.conf as well
Comment 10 Thomas Lübking 2013-08-18 15:11:14 UTC
triple buffering is enabled by the driver.
The KWin output is too short - it takes a few seconds (many frames) to figure whether triple buffering is enabled or not (there seems no legal way, so we just measure how long swapping takes - fast return means swapping)

Does the screen actually run at 100Hz (CRT?)
Comment 11 S. Christian Collins 2013-08-18 15:12:36 UTC
Well, after I copied the kwin debug output for you, I got this:

kwin(4569) KWin::SwapProfiler::end: Triple buffering detection: "NOT available"  - Mean block time: 4.39506 ms

Looks like I just didn't wait long enough.

Yes, I had to create metamodes to get ideal refresh rates from my CRT. It does run at 100 Hz in this resolution (1280 x 960). Need to have high refresh for my 3D shutter glasses :)
Comment 12 S. Christian Collins 2013-08-18 15:14:13 UTC
Not to mention, kwin running buttery-smooth at 100 Hz is a thing of beauty to behold :)
Comment 13 Matthias Dahl 2013-08-18 15:25:16 UTC
I've been bitten by this bug as well.

This worked just fine up to the final 4.11 release. Now I either have to set KWIN_TRIPLE_BUFFER=1 or set the tearing prevention to None and back to Automatic (or whatever I want) before it becomes active again after the login.

Triple buffering is active and set, by the way.
Comment 14 Matthias Dahl 2013-08-18 15:30:00 UTC
Seems like this is a race condition of some kind. After KDE is up and running, doing a "kwin --replace" fixes the problem as well and properly activates the tearing prevention (w/o KWIN_TRIPLE_BUFFER set):

kwin(8121) KWin::SwapProfiler::end: Triple buffering detection: "Available"  - Mean block time: 0.234613 ms
Comment 15 S. Christian Collins 2013-08-18 15:31:08 UTC
I replaced 'export __GL_YIELD="USLEEP"' with 'export KWIN_TRIPLE_BUFFER=1' in my /etc/profile, and it works. I get full v-sync on login.

To answer your previous question about the kwin loader script file, I had placed the script in /usr/local/bin. The true kwin binary is in /usr/bin.
Comment 16 Thomas Lübking 2013-08-18 19:29:47 UTC
(In reply to comment #14)
> running, doing a "kwin --replace" will fix the problem as well and properly
> activate the tearing prevention

This would mean that buffer swapping takes more time under load - could be due to X11, or because the driver cannot flip.

We had better delay the measuring...

Just to be sure:
have you allowed flipping in nvidia-settings ("OpenGL Settings" page)?
Comment 17 Thomas Lübking 2013-08-18 19:42:38 UTC
(In reply to comment #15)
> I replaced 'export __GL_YIELD="USLEEP"' with 'export KWIN_TRIPLE_BUFFER=1'
> in my /etc/profile, and it works. I get full v-sync on login.

Notice that if swapping indeed blocks for you (for whatever reason, but the measured time suggests as much), this might show inferior performance compared to exporting __GL_YIELD="USLEEP" instead:

1. because nvidia then would perform a busy wait (causing CPU load)
2. because kwin would approach the swap at rather random times and often miss frames, or waste a lot of time waiting for the sync.
 
> To answer your previous question about the kwin loader script file, I had
> placed the script in /usr/local/bin. The true kwin binary is in /usr/bin.
Did you get any output to /tmp/kwin.log?
I can only assume that /usr/bin is not in the output of "kde4-config --path exe" ... or /bin/sh is not bash and I've got some bash/zsh slang in there ;-)
(try replacing the shebang with #!/bin/bash)
Comment 18 S. Christian Collins 2013-08-18 21:22:37 UTC
(In reply to comment #17)
> Notice that if swapping indeed blocks for you (for what reason ever, but the
> measured time suggests such) this might show inferior performance to
> exporting __GL_YIELD="USLEEP" instead.

Okay, I have switched back to __GL_YIELD="USLEEP" and kwin's CPU usage when dragging a window quickly in circles has gone down slightly (from 3-4% down to 1-2%). Any reason why both the "__GL_YIELD" and "KWIN_TRIPLE_BUFFER" flags shouldn't be set?

> Did you get any output to /tmp/kwin.log?
> I could only assume that /usr/bin is not in the output of "kde4-config
> --path exe" ... or /bin/sh is not bash and i've some bash/zsh slang in there
> ;-)
> (try replacing the shebang with #!/bin/bash)

Replacing the first line with "#!/bin/bash" fixed it for me.
Comment 19 Thomas Lübking 2013-08-18 22:06:27 UTC
(In reply to comment #18)

> to 1-2%). Any reason why both the "__GL_YIELD" and "KWIN_TRIPLE_BUFFER" flags shouldn't be set?

No. USLEEPing is orthogonal to triple buffering (or even its detection) - and probably the better choice to distribute the CPU between KWin and the GL driver.

> Replacing the first line with "#!/bin/bash" fixed it for me.
I guess /bin/sh is dash?
Comment 20 S. Christian Collins 2013-08-18 23:17:45 UTC
(In reply to comment #19)
> (In reply to comment #18)
> 
> > to 1-2%). Any reason why both the "__GL_YIELD" and "KWIN_TRIPLE_BUFFER" flags shouldn't be set?
> 
> No. USLEEPing is orthogonal to triple buffering (or even its detection) -
> and probably the better choice to distribute the CPU between KWin and the GL
> driver.

Sorry, I don't understand your answer, particularly the use of the word "orthogonal". It seems that you are saying that the two methods don't have anything to do with one another, and the USLEEP method will give superior performance to triple-buffering.

> > Replacing the first line with "#!/bin/bash" fixed it for me.
> I guess /bin/sh is dash?

Yes, it is.
Comment 21 Thomas Lübking 2013-08-19 12:17:37 UTC
(In reply to comment #20)
> It seems that you are saying that the two methods don't have
> anything to do with one another

Yes.


> and the USLEEP method will give superior performance to triple-buffering.
No, it's more complex than that.

If triple buffering works, the driver doesn't have to wait for a swap.
"No wait" means "no busy wait", and that means no pointless CPU load, regardless of the yielding strategy.
It also means we don't miss a swap, nor waste any time (w/ or w/o CPU load) waiting for it (time that is free for all the other action the WM has to do - like repositioning a window while you move it around)

=> triple buffering is a good idea if your GPU has sufficient VRAM (which is nowadays usually the case)

Selecting USLEEP as the yielding strategy means that the driver neither occupies too many CPU slices ("stealing them from the actual process" - as with the "NOTHING" strategy) nor may lose the CPU for unpredictable amounts of time (as with the default sched_yield() call)

=> For the WM scenario, I consider USLEEP the better yielding strategy - regardless of the fact that it's actually required to prevent the busy wait on double buffered setups.
Comment 22 Matthias Dahl 2013-08-19 13:15:21 UTC
(In reply to comment #16)

> This would mean that buffer swapping takes more time under load, could be
> due to X11 or the driver cannot flip

Still, there is something more going on, if you ask me. A simple "kwin --replace" works 9 out of 10 times, but not always - even if the system is totally idle. Something is racy. In the rare case where it does not work initially, it always works the second time... and going through the system settings has never failed to work at all.

> Just to be sure:
> you've flipping allowed in nvidia-settings ("OpenGL Settings" page)?

Yes. Even though I don't use nvidia-settings for anything anymore nowadays, I checked and put it in the Autostart to load the configuration... but it made no difference at all.

One quick note about __GL_YIELD="USLEEP": with today's kernels, usleep(0) is a no-op and has been since the introduction of HPETs, afaik. Put differently, the thread will run off its time slice just as if there were no usleep(0) at all.
Comment 23 Mahendra Tallur 2013-08-19 13:34:56 UTC
Hi! I would just like to confirm that enabling triple buffering alone is not enough to avoid tearing (you have to switch the OpenGL mode, for instance -- just disabling and re-enabling desktop effects is not enough either).

(I have intermittent tearing with nvidia proprietary drivers since 4.11.0)
Comment 24 Thomas Lübking 2013-08-21 09:42:02 UTC
*** Bug 323817 has been marked as a duplicate of this bug. ***
Comment 25 Thomas Lübking 2013-08-21 14:45:20 UTC
*** Bug 323835 has been marked as a duplicate of this bug. ***
Comment 26 modellbaukeller 2013-08-22 21:28:10 UTC
Confirmed that TripleBuffer alone isn't enough; it doesn't prevent KWin from tearing massively (I never saw tearing in KDE before 4.11), just as mentioned in #23.
Comment 27 Thomas Lübking 2013-08-22 21:42:46 UTC
(In reply to comment #26)
> Confirmed that TripleBuffer isn't enough at all

Because the setting is casually misdetected at startup (there's no legal way to query it), and tearing prevention is then forcefully turned off.
That's already common knowledge.

See comment #15 for how to work around it.
Comment 28 Thomas Lübking 2013-08-26 07:23:46 UTC
*** Bug 324049 has been marked as a duplicate of this bug. ***
Comment 29 Thomas Lübking 2013-08-27 06:49:42 UTC
This patch defers the triple buffering detection; I'd appreciate it if anyone suffering from the misdetection on login could apply and test it, thanks.

https://git.reviewboard.kde.org/r/112308/
Comment 30 svenssonsven8 2013-08-28 17:49:30 UTC
I haven't had the chance to try the fix from comment #29, but I can report that the tearing is not limited to nVidia only. I upgraded to 4.11 on my laptop, which has Optimus graphics (nVidia/Intel), and I get the same tearing on login with only the Intel card enabled:

user@user-laptop:~$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 525M] (rev ff)
user@user-laptop:~$

However, in this case, changing the compositing type to OpenGL 3.1 and back to OpenGL 2.0 does not help. What does help is changing the tearing prevention (I selected "Re-use screen content"). The tearing immediately disappeared.

Just wanted to provide more info. I am not that versed in how to apply the patch, so I need some more (newbie) info on that.

Kind Regards,
Veroslav
Comment 31 Felix Michel 2013-08-28 17:55:49 UTC
Me too; I would also need some more newbie info to apply the fix.
Comment 32 Thomas Lübking 2013-08-28 19:19:59 UTC
(In reply to comment #30)
> I haven't had the chance to try the fix from comment #29, but I can inform
> that the tearing is not limited to nVidia only. 
Unrelated.

> What helps in this case is changing the tearing prevention
The default strategy for intel chips is "only when cheap", which for the moment means "only when large screen portions are repainted" - like fullscreen video playback or scrolling in a maximized browser.
> (I selected "Re-use screen content")
Do not use that strategy with mesa drivers! From all we can tell, it causes a HUGE overhead.
Pick "force fullscreen repaints" if desired, then restart kwin (in doubt, log out and in - restarting the compositor will usually NOT be sufficient)

> Just wanted to provide more info. I am not that versed in how to apply the
> path so I need some more (newbie) info on that.
The patch only has a tearing impact on nvidia GPUs, yet it might get you a more responsive compositor on others.
To apply it, you need to get the source code of kwin, apply the patch with "patch < patch.diff" and then compile kwin. If you generally feel up to that and just require info on details, please call back.
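
(A rough sketch of those steps, assuming a kde-workspace source checkout, an out-of-source CMake build, and a diff saved from the review request - the paths and the patch strip level are assumptions:)

cd kde-workspace
patch -p0 < patch.diff       # apply the kwin patch to the source tree
mkdir -p build && cd build
cmake .. && make             # rebuild; kwin is part of kde-workspace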
Comment 33 S. Christian Collins 2013-08-28 22:17:45 UTC
I tested the patch, but it doesn't solve the problem on my system. Triple-buffering is still not detected correctly during login, but is detected correctly when running "kwin --replace &" after login has completed.

For the record, my system:
OS: Kubuntu 12.04 64-bit w/ KDE SC 4.11.0 (from Kubuntu backports PPA)
Motherboard: ASRock X58 Extreme3 (Intel X58 chipset)
CPU: Intel Core i7 930 (2.8 GHz quad-core)
RAM: 12GB DDR3
Video: NVIDIA GeForce 7900 GS w/ 256 MB RAM (PCI Express)
Sound Card #1: Sound Blaster Audigy 2 ZS Gold
Sound Card #2: Echo Gina3G
Linux Kernel: 3.5.0-39-generic
NVIDIA video driver: 304.88
Screen Resolution: 1280 x 960
Comment 34 S. Christian Collins 2013-08-28 22:21:30 UTC
Whoops, I stand corrected... I didn't wait long enough. After running "kwin --replace &", vsync was initially enabled, but then it became disabled and I got the following output:

kwin(4163) KWin::SwapProfiler::end: Triple buffering detection: "NOT available"  - Mean block time: 4.51748 ms
kwin(4163) KWin::GlxBackend::present: 
It seems you are using the nvidia driver without triple buffering
You must export __GL_YIELD="USLEEP" to prevent large CPU overhead on synced swaps
Preferably, enable the TripleBuffer Option in the xorg.conf Device
For this reason, the tearing prevention has been disabled.
Comment 35 S. Christian Collins 2013-08-28 22:25:39 UTC
Okay, please ignore my previous two comments until I've done more testing.
Comment 36 S. Christian Collins 2013-08-28 22:32:20 UTC
Alright, so it seems that *sometimes* it will properly detect triple-buffering and sometimes not. If I restart kwin and then just wait, doing nothing, I get this:

kwin(4870) KWin::SwapProfiler::end: Triple buffering detection: "Available"  - Mean block time: 0.673479 ms

... however, if I restart kwin and then start dragging windows around the screen, I get this:

kwin(4876) KWin::SwapProfiler::end: Triple buffering detection: "NOT available"  - Mean block time: 9.76569 ms
kwin(4876) KWin::GlxBackend::present: 
It seems you are using the nvidia driver without triple buffering
You must export __GL_YIELD="USLEEP" to prevent large CPU overhead on synced swaps
Preferably, enable the TripleBuffer Option in the xorg.conf Device
For this reason, the tearing prevention has been disabled.
See https://bugs.kde.org/show_bug.cgi?id=322060
Comment 37 Thomas Lübking 2013-08-28 23:28:27 UTC
*** Bug 324190 has been marked as a duplicate of this bug. ***
Comment 38 Thomas Lübking 2013-08-28 23:35:36 UTC
(In reply to comment #36)
> Alright, so it seems that *sometimes* it will properly detect
> triple-buffering and sometimes not. If I restart kwin and then just wait
> ...
> ... however, if I restart kwin, and then start dragging windows around the
> screen, I'll get this:
> 
> kwin(4876) KWin::SwapProfiler::end: Triple buffering detection: "NOT
> available"  - Mean block time: 9.76569 ms

I got it (USLEEP) up to only 1.40989 ms, but in general I can confirm the effect.
Either we approach with too many swap calls (and the driver blocks the third frame), or the swap function simply takes too long under stress... gonna check, but I fear the latter :-(
Comment 39 t.jp 2013-08-29 12:13:15 UTC
Hi,
is this really the same bug? I don't see abnormal CPU load on my system, no matter if vsync works or not. All my CPUs are at roughly 0% shortly after startup, during the time when everything is synced at 60 FPS, and even when it rises to 100 FPS and screen tearing appears. My system is an Intel i7 2600K @ 4.5 GHz with an NVIDIA GTX 580.

Kind Regards,

Tim


Comment 40 Thomas Lübking 2013-08-29 14:58:58 UTC
(In reply to comment #39)
> Hi,
> is this really the same bug?
Ultimately yes.

> I don't see abnormal CPU load on my system. 
Because we prevent it (plus, with actual triple buffering there's no wait, thus no busy wait and no CPU load)

Disabling vsync on triple buffering misdetection is a false positive of the CPU load preservation.

I'm pretty sure I know what causes this (the misdetection - we try to paint too often) and hopefully will have a patch this evening - but 4.11.1 is tagged tonight as well :-(


PS: please don't quote unneeded stuff; you're posting to a bugtracker and everything is already there in the list of comments anyway ;-)
Comment 41 Thomas Lübking 2013-08-29 23:43:30 UTC
Not what I thought, but ultimately the same ;-)
https://git.reviewboard.kde.org/r/112368/

The patch works when starting kwin w/ ShowFPS enabled, which had been a reliable way to break the detection.
Comment 42 Thomas Lübking 2013-09-03 13:57:23 UTC
*** Bug 324437 has been marked as a duplicate of this bug. ***
Comment 43 Thomas Lübking 2013-09-10 11:51:51 UTC
*** Bug 324742 has been marked as a duplicate of this bug. ***
Comment 44 Thomas Lübking 2013-09-10 21:53:09 UTC
*** Bug 324770 has been marked as a duplicate of this bug. ***
Comment 45 Nikos Chantziaras 2013-09-14 14:27:59 UTC
(In reply to comment #41)
> Not what I thought, but ultimately the same ;-)
> https://git.reviewboard.kde.org/r/112368/
> 
> The patch works when starting kwin w/ ShowFPS enabled, what had been a
> reliable way to break detection.

I applied this to kde-base/kwin on Gentoo and it fixes the issue for me. VSync is now enabled on every login.
Comment 46 Thomas Lübking 2013-09-25 21:15:58 UTC
Git commit 0c7fe70a1a89c844f8fbdcc7b3799852ad14d5cd by Thomas Lübking.
Committed on 29/08/2013 at 23:30.
Pushed by luebking into branch 'KDE/4.11'.

fix scheduling the repaints

repaints caused by effects so far polluted the timing calculations
since they started the timer on the old vsync offset
This (together with undercut timing) led to multiple frames in
the buffer queue, and ultimately to a blocking swap

For unsynced painting, it simply caused wrong timings - leading to
"well, kinda around 60Hz - could just as well be 75".

REVIEW: 112368
that part is fixed in 4.11.2

M  +27   -6    kwin/composite.cpp
M  +10   -0    kwin/eglonxbackend.cpp
M  +10   -0    kwin/glxbackend.cpp

http://commits.kde.org/kde-workspace/0c7fe70a1a89c844f8fbdcc7b3799852ad14d5cd
Comment 47 Nikos Chantziaras 2013-09-25 22:01:29 UTC
Just applied this. VSync is working fine on login.
Comment 48 BasioMeusPuga 2013-10-02 13:35:21 UTC
Still getting this in 4.11.2. Am I missing something?
Comment 49 Thomas Lübking 2013-10-02 13:48:17 UTC
You still require either triple buffering enabled (IIRC it's disabled by default in the nvidia driver; /var/log/Xorg.0.log will tell whether it's activated) - or to export __GL_YIELD="USLEEP" by hand.

The patch is only supposed to fix the triple buffering misdetection. We've not found a way to convince the driver into usleep yielding after the process has started.
Comment 50 svenssonsven8 2013-10-06 13:31:11 UTC
Unfortunately, after upgrading to 4.11.2, I am still experiencing both the tearing (starting a few seconds after the desktop loads) and the high CPU load (kwin rising to 60-80% when watching a video and 30-40% when just moving some windows around). Was the patch supposed to prevent the tearing but not the high CPU load? I never had this problem before I upgraded to 4.11.0, so I guess I have triple buffering working OK?
Comment 51 Thomas Lübking 2013-10-06 14:02:54 UTC
(In reply to comment #50)
> Was the patch supposed to prevent the tearing but not
> the high cpu load?

The patch was supposed to fix the triple buffering detection.

> I never had the same problem before I've upgraded to
> 4.11.0, so I guess that I have triple buffering working ok?
No. With either triple buffering or __GL_YIELD="USLEEP" there should be no CPU load.
The TB detection doesn't matter - if you had triple buffering, the swap wouldn't have to wait, would not perform a busy wait, and would not load the CPU.

run:

   grep -i triple  /var/log/Xorg.0.log

if that doesn't print something like

     [    14.137] (**) NVIDIA(0): Option "TripleBuffer" "True"

triple buffering is rather not enabled.


Please check for tearing/CPU load either after triple buffering is enabled for sure or you ran

export __GL_YIELD="USLEEP"
kwin --replace &

to prevent busy waits in the driver.

However, with no swap interval set (and thus tearing), there should be no CPU load from the swapping either (because it doesn't have to wait for the retrace).
Maybe also check that you didn't set MaxFPS (to something scary like 999) or the RefreshRate (~/.kde/share/config/kwinrc)
Comment 52 svenssonsven8 2013-10-12 10:26:54 UTC
Hi Thomas,
thank you for your reply, and I apologize for the somewhat late response. I didn't get any output after running:

grep -i triple /var/log/Xorg.0.log

so I decided to manually add export __GL_YIELD="USLEEP" to /etc/profile. Since the following reboot I haven't noticed any tearing, and kwin's CPU usage is constantly low (3-4%), even when watching non-fullscreen videos and moving windows around. I am very thankful!

Could you please just confirm that this is the correct solution in my case (to manually add the variable to /etc/profile)?

Kind Regards,
Veroslav
Comment 53 Nikos Chantziaras 2013-10-12 10:34:37 UTC
(In reply to comment #52)
> so I decided to manually add export __GL_YIELD="USLEEP" to /etc/profile.
> [...]
> Could you please just confirm that this is the correct solution in my case
> (to manually add the variable to /etc/profile)?

The overall better solution is to enable triple buffering. This is going to benefit you not only with KWin, but with OpenGL applications in general.
Comment 54 Thomas Lübking 2013-10-12 11:24:17 UTC
(In reply to comment #52)

> Could you please just confirm that this is the correct solution in my case
> (to manually add the variable to /etc/profile)?

The "proper" way here would be to export it from an executable shell scriptlet in ~/.kde/env (executed by startkde), but exporting it in /etc/profile will do the job as well of course.

Nikos is right in suggesting to activate trimple buffering, though.

You may also use the kwin loader script attached to this bug (which allows you to set various environment vars affecting kwin) - by placing it into an upper position of the PATH (eg. ~/bin or /usr/local/bin) it will shadow /usr/bin/kwin on execution and set the relevant environment for KWin only (not affecting other processes, though w/o triple buffering, many or all vsyncing games should run into the same issue)
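
(A minimal sketch of such a scriptlet - the file name is hypothetical; it must be executable and live in ~/.kde/env so that startkde runs it before the session starts:)

#!/bin/sh
# ~/.kde/env/gl_yield.sh - sourced by startkde before kwin is launched
export __GL_YIELD="USLEEP"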
Comment 55 svenssonsven8 2013-10-12 13:20:34 UTC
Thank you for quick replies.

I was going to enable triple buffering, but I've never done this before and would need some help.

If I understood it correctly, I would need to add the following line to xorg.conf:

Option         "TripleBuffer" "True"

and remove the __GL_YIELD="USLEEP" line altogether? I don't appear to have an xorg.conf file, so I guess I need to create one manually. Should I just create an empty file /etc/X11/xorg.conf and add

Option         "TripleBuffer" "True"

to it, or is it safer to simply run nvidia-xconfig, let it create the file, and then add the line above? I'm sorry for the noob questions; I just want to make sure that I don't destroy my system :)

Thank you in advance.

Regards,
Veroslav
Comment 56 Thomas Lübking 2013-10-12 14:17:03 UTC
(In reply to comment #55)
> Thank you for quick replies.
> 
> I was going to enable triple buffering, but I've never done this before and
> would need some help.

xorg.conf is deprecated; instead, add a snippet to /etc/X11/xorg.conf.d/20-nvidia.conf (your distro should rather ship that file anyway) containing:

Section "Device"
        Identifier "Default nvidia Device"
        Driver "nvidia"
        Option "NoLogo" "True"
        Option "CoolBits" "1"
        Option "TripleBuffer" "True"
EndSection


CoolBits allows overclocking (on some devices) and NoLogo removes the startup advertisement ;-)

If you have such a file and it has some other entries, do not remove those; just add the TripleBuffer option to the "Device" section
Comment 57 S. Christian Collins 2013-10-12 15:20:20 UTC
To answer your question, Veroslav: if I'm not mistaken, enabling both triple buffering in the NVIDIA driver and __GL_YIELD="USLEEP" in /etc/profile is ideal. That way games can use triple buffering and kwin can use its lower-CPU sync method.
Comment 58 svenssonsven8 2013-10-14 08:16:29 UTC
Thank you both Thomas and Christian,

I've added the xorg.conf snippet posted in comment #56 to /usr/share/xorg.conf.d/20-nvidia.conf (I had to create the file as it wasn't there; also, the path to xorg.conf.d on K/Ubuntu seems to differ from the one in Thomas' example).

After the restart, everything was working very well (no tearing and minimal kwin CPU usage).

Also: 

grep -i triple /var/log/Xorg.0.log 

gave an output similar to:

 [ 14.137] (**) NVIDIA(0): Option "TripleBuffer" "True" 

Now I just need to enable __GL_YIELD="USLEEP" as Christian suggested in comment #57.

Very satisfied, thanks again!

Regards,
Veroslav
Comment 59 Nikos Chantziaras 2013-10-14 08:26:30 UTC
(In reply to comment #58)
> I've added the xorg.conf snippet posted in comment #56 to
> /usr/share/xorg.conf.d/20-nvidia.conf

You should put that file in /etc/X11/xorg.conf.d/ (create the path if it doesn't exist.)
Comment 60 svenssonsven8 2013-10-14 10:01:36 UTC
(In reply to comment #59)

That was my original thought as well, but then I read in several places that /etc/X11/xorg.conf.d is not used anymore, so I am a bit confused as to which one it really is (there are many conflicting answers on this topic). I will test /etc/X11/xorg.conf.d, although the driver seems to be able to find my .conf file, as the output of

grep -i triple /var/log/Xorg.0.log 

seems to suggest. Thank you for the input, Nikos.
Comment 61 Nikos Chantziaras 2013-10-14 10:58:34 UTC
(In reply to comment #60)
> That was my original thought as well, but then I read in several places that
> /etc/X11/xorg.conf.d is not being used anymore

It is used. It's the standard place where the X.Org server is looking for configuration files and it deprecates the old, monolithic /etc/X11/xorg.conf file.
Comment 62 Twisted Lucidity 2013-10-14 13:48:15 UTC
> [xorg.conf.d] is used. It's the standard place where the X.Org server is looking for
> configuration files and it deprecates the old, monolithic /etc/X11/xorg.conf
> file.

To be clear, "xorg.conf.d" is a folder that may contain a set of files configuring various devices, cards, etc. It's no longer one file (i.e. the "xorg.conf" file is old [deprecated] and not used).

That all said, I found that my install of Kubuntu 13.04 *is* using xorg.conf, so that's the file I updated with the details from #56 (I am using the nvidia proprietary drivers).

I also added this line from #51 to the start of /etc/profile:
export __GL_YIELD="USLEEP"

KDE now starts for me with no tearing. No more need to go into "System Settings/Desktop Effects/Advanced", change the compositing type, "Apply" and then "Revert" to get a tear-free display.
Comment 63 Thomas Lübking 2013-10-19 20:39:09 UTC
*** Bug 326264 has been marked as a duplicate of this bug. ***
Comment 64 Gunther Piez 2013-12-11 12:41:32 UTC
Just got hit by this. I was wondering why anti-tearing (which works fine on my system) seemed to get disabled after a while for no apparent reason.

After reading the debug output from "kwin"
>> It seems you are using the nvidia driver without triple buffering...

things became clearer. Unfortunately, neither setting __GL_YIELD=USLEEP (causes major fps drops and stuttering during gaming) nor activating triple buffering (adds another 16 ms to the overall latency) is acceptable for me, while the "cpu spikes" are perfectly fine, because they have no noticeable negative impact on my box (desktop, i7@4.6 GHz, GTX 570).

I understand that this may be very different for a laptop, but forcefully disabling vsync to "protect" users is IMHO the wrong solution; a log message is sufficient and may be the right thing.

I have adopted the solution of starting kwin with a script which sets __GL_YIELD=USLEEP, which seems to work fine, and of making sure __GL_YIELD isn't set anywhere else.

I use __GL_THREADED_OPTIMIZATIONS=1 (which is a nice 50% fps improvement during gaming, essentially for free), but I have no idea how it affects kwin; maybe that's the reason the "cpu spikes" I saw in the first place didn't have any negative impact on performance.
Comment 65 Thomas Lübking 2013-12-11 14:02:39 UTC
(In reply to comment #64)

It's also possible to utilize nvidia-settings application profiles for this.

> I use __GL_THREADED_OPTIMIZATIONS=1, but I have no idea how it affects kwin

"depends"
LD_PRELOAD="libpthread.so.0 libGL.so.1" __GL_THREADED_OPTIMIZATIONS=1 kwin --replace &

-> see https://git.reviewboard.kde.org/r/111354/

As for the default handling of this:
"not working vsync" would be a "remaining" issue, while "kwin suddenly eats 30% cpu" would be a severe regression - and for the default behavior, we have to care more about the users who know "where the power button is" than about those who know how to use google and tune their system anyway.
Comment 66 Gunther Piez 2013-12-11 15:23:47 UTC
(In reply to comment #65)
> (In reply to comment #64)
> 
> It's also possible to utilize nvidia-settings application profiles for this.

I created a profile with key "GLYield" and value "USLEEP", which has (or should have) the same effect as setting __GL_YIELD, but unfortunately (and probably expectedly) kwin still disables antitearing because the environment variable is not set, even though the functionality is there. Is it possible to set KWIN_TRIPLE_BUFFER even while triple buffering is disabled, to avoid this?
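
(For reference, a sketch of such a profile in the driver's JSON application-profile format, e.g. in ~/.nv/nvidia-application-profiles-rc; the profile name is made up, and the exact key spelling should be checked against the driver README:)

{
  "rules": [ {
    "pattern": { "feature": "procname", "matches": "kwin" },
    "profile": "kwin-usleep"
  } ],
  "profiles": [ {
    "name": "kwin-usleep",
    "settings": [ { "key": "GLYield", "value": "USLEEP" } ]
  } ]
}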
> 
> > I use __GL_THREADED_OPTIMIZATIONS=1, but I have no idea how it affects kwin
> 
> "depends"
> -> see https://git.reviewboard.kde.org/r/111354/

I have disabled it now, as it is not clear how many gl* calls that return information from a context are in kwin. BTW, in wine this can be quite easily avoided at runtime with WINEDEBUG=-all.
> 
> As for the default handling of this:

I understand :-)

Anyway, there are multiple workarounds for me.
Comment 67 Thomas Lübking 2013-12-11 22:12:13 UTC
(In reply to comment #66)
> Is it possible to set KWIN_TRIPLE_BUFFER even while triple buffering
> is disabled to avoid this?

NO!

The env var only exists because triple buffering detection is heuristic.
If you "lie" about this, kwin will enter a non-blocking path while glXSwapBuffers will actually block. You'll run off-sync and spend more and more time in (CPU-expensive) waiting for the buffer swap.

> kwin still disables antitearing because the environment
> variable is not set, even if the functionality is there. 
Ok, I frankly never tested it but had just expected the driver to set that environment variable from the profile.
Comment 68 Gunther Piez 2013-12-12 11:03:30 UTC
> > kwin still disables antitearing because the environment
> > variable is not set, even if the functionality is there. 
> Ok, I frankly never tested but had just expected the driver would set that
> environment from the profile.

I believe it is not (at least not easily) possible for a dynamic library to change the environment of the calling process. The other way around, setting __GL_YIELD from kwin for libnvidia-glcore.so or libGL.so before the libraries are actually loaded, could be possible with some evil (and non-portable) dlopen() trickery.
I am not sure how I would approach this; maybe by making kwin only a stub which sets the environment and loads the real kwin worker.
Comment 69 Nikos Chantziaras 2013-12-16 13:00:34 UTC
The issue with vsync is back. It doesn't get consistently applied on login.

KDE 4.11.4. Triple buffering is enabled:

$ grep -i triple /var/log/Xorg.0.log
[    21.596] (**) NVIDIA(0): Option "TripleBuffer" "True"
Comment 70 Thomas Lübking 2013-12-16 14:50:24 UTC
With what kind of change (KDE update from - to, etc.)?
For what measured swap time? (Watch the debug output for area 1212; activate it in "kdebugdialog")

For a hotfix, "export KWIN_TRIPLE_BUFFER=1"
Comment 71 Nikos Chantziaras 2013-12-17 12:32:47 UTC
(In reply to comment #70)
> With what kind of change (KDE update from - to, etc.)
> For what measured swap time? (Watch debug output for 1212, activate it in
> "kdebugdialog")

Not sure exactly. It's been a while since I used the Linux installation. I think I upgraded from 4.11.2 to 4.11.4. At the same time, I also upgraded the GPU from a GTX 560 Ti to a GTX 780. There were lots of other updates as well (kernel, X.Org, tons of libraries, NVidia driver.)

kwin says:

KWin::SwapProfiler::end: Triple buffering detection: "Available"  - Mean block time: 0.11997 ms

The above is with vsync working. Should I try to reproduce the problem and then get the debug message again? It happens somewhat rarely (1 out of 10 logins or so.)
Comment 72 Thomas Lübking 2013-12-17 15:30:41 UTC
(In reply to comment #71)

> The above is with vsync working. Should I try to reproduce the problem and
> then get the debug message again?

Yes, please. Use "kdebugdialog --fullmode" to redirect the kwin (1212) output to a file for this purpose.
Smells like something hangs (for other reasons) and you miss the clip value (1 ms) by a few ns or so.
Comment 73 Luke McCarthy 2013-12-17 23:43:29 UTC
*** Bug 328781 has been marked as a duplicate of this bug. ***
Comment 74 Nikos Chantziaras 2013-12-21 20:34:22 UTC
It's been four days now and I wasn't able to reproduce this even once :-/ Although I did now upgrade to KDE 4.12.0, kwin is still at 4.11.4 (and AFAIK, there will be no 4.12 release of kwin). Unless the update changed something that would result in the bug not triggering anymore.

Anyway, on a related matter: since I upgraded my GPU to a GTX 780, performance has increased immensely. Triple buffering was very useful before, as falling below 60FPS would introduce quite some input lag, and TB would mitigate this at the cost of an additional frame of input lag in 60FPS situations. That was the lesser evil, so TB was a good thing.

Now, 60FPS is a given in pretty much everything I care to throw at the graphics card. The only thing TB now does is add input lag, which isn't a compromise anymore but a clear-cut drawback. Playing "Metro Last Light" or "Left 4 Dead 2", for example, with TB enabled makes the mouse input feel "floaty." This isn't as important in 3D applications, but in games it's quite important for the controls to feel "snappy."

So the question is: is/will kwin be able to cope with this? Linux as a gaming platform is becoming more important as of late.
Comment 75 Gunther Piez 2013-12-23 16:06:28 UTC
(In reply to comment #74)
> It's been four days now and I wasn't able to reproduce this again even once
> :-/ Although I did now upgrade to KDE 4.12.0, kwin is still at 4.11.4 (and
> AFAIK, there will be no 4.12 release of kwin). Unless the update did change
> something that would result in the bug not triggering anymore.

I have seen this performance drop once - after a few hours of gaming in WoW. Ctrl+Alt+F1 and back to X eventually fixed it - it may possibly be an nvidia driver bug.
> 
> Anyway, on a related matter: since I upgraded my GPU to a GTX 780,
> performance has increased immensely.

I would expect that ^^

> So the question is: is/will kwin be able to cope with this? Linux as a
> gaming platform is becoming more important as of late.

I do not believe that generic linux distros will ever be of any importance to the gaming software industry - but anyway, here is my "bugfix":

Disable triple buffering generally and create an executable shell script

#!/bin/bash
__GL_YIELD=USLEEP exec /usr/bin/kwin "$@"

in /usr/local/bin/kwin, and make sure /usr/local/bin comes before /usr/bin in your $PATH. That way I get it all: low CPU usage, 60 fps in desktop effects, and lag-free gaming.
Comment 76 Thomas Lübking 2013-12-29 13:10:29 UTC
*** Bug 329297 has been marked as a duplicate of this bug. ***
Comment 77 S. Christian Collins 2014-02-21 19:24:45 UTC
I just updated to KDE 4.12.2, and Vsync no longer works when tearing prevention is set to "Automatic" in the desktop effects settings. I have the following in my /etc/profile:

export __GL_YIELD="USLEEP"

Vsync was working fine before the update from 4.12.0 to 4.12.2. Turning on kwin logging doesn't seem to provide anything helpful. Vsync does work, however, if I select "Full scene repaints" for kwin's tearing prevention. I'm not sure if this is a good thing or not. Are there any downsides to using "Full scene repaints" vs. "Automatic"?

My gfx card: NVIDIA GeForce 7900 GS w/ 256 MB RAM (PCI Express)
NVIDIA video driver: 304.116
Comment 78 Thomas Lübking 2014-02-21 20:28:11 UTC
Likely https://git.reviewboard.kde.org/r/115523/
The legacy 304.xxx drivers don't provide "glxinfo | grep buffer_age", do they?
Comment 79 S. Christian Collins 2014-02-21 21:00:03 UTC
glxinfo | grep buffer_age
...returns nothing on my system.
Comment 80 Thomas Lübking 2014-02-21 21:06:40 UTC
It's the bug covered by that patch, then (and it has nothing to do with yielding, triple buffering, etc.; it's just broken)
The patch will be in 4.11.7 (released on March 5th)

Full scene repaints have a slight overhead (in case anyone reads this: "on the nvidia blob"; on MESA, frontbuffer copying is unusable) - and more if you make "excessive" use of blurring.
Comment 81 Thomas Lübking 2014-02-24 19:38:31 UTC
Git commit b00cc9cda191795ceae874526c7bd57b2a832982 by Thomas Lübking.
Committed on 05/02/2014 at 23:16.
Pushed by luebking into branch 'KDE/4.11'.

fix frontbuffer copying swap preference

REVIEW: 115523
Related: bug 330794

M  +1    -1    kwin/scene_opengl.cpp

http://commits.kde.org/kde-workspace/b00cc9cda191795ceae874526c7bd57b2a832982
Comment 82 Thomas Lübking 2014-03-04 17:45:23 UTC
*** Bug 331720 has been marked as a duplicate of this bug. ***
Comment 83 Nikos Chantziaras 2014-03-04 20:21:32 UTC
This is from the changelog of the latest NVidia driver release (334.21). Is this relevant or totally unrelated?

"Fixed a bug in the GLX_EXT_buffer_age extension where incorrect ages would be returned unless triple buffering was enabled."
Comment 84 Thomas Lübking 2014-03-04 21:28:19 UTC
That should be related to bug #330794.
It has no impact on the busy wait (which I didn't test for the last two driver point releases, though)
Comment 85 S. Christian Collins 2014-03-14 15:48:58 UTC
I'm still having this issue in KDE 4.12.3. Is the patch in comment 81 included in this version?
Comment 86 Thomas Lübking 2014-03-14 18:28:15 UTC
There's no 4.12 version of kwin - you'll have to check "kwin --version" (and you need 4.11.7)
Comment 87 S. Christian Collins 2014-03-21 17:04:44 UTC
Aah... I'm only on version 4.11.6. Thanks for the info.
Comment 88 S. Christian Collins 2014-04-18 19:28:35 UTC
I'm on kwin 4.11.8 now and I can confirm that vsync is working again on my system.
Comment 89 AnAkkk 2014-05-30 21:46:31 UTC
Changelog of the latest Nvidia driver:
"Fixed a performance regression when running KDE with desktop effects using the OpenGL compositing backend."

https://devtalk.nvidia.com/default/topic/748536
Comment 90 Thomas Lübking 2014-05-31 18:27:53 UTC
bug #244253 - I don't think it will be the busy wait (but I just updated the driver - waiting for reboot ;-)
Comment 91 Philip Sequeira 2014-05-31 19:47:43 UTC
I had a big performance regression after the last driver update and hadn't gotten around to debugging it. I won't be on the computer with the nvidia card for another day or so, so I can't test the new one yet, but it's probably that, and not the sleeping business.

usleep is still working great for me, though, as far as this issue is concerned.
Comment 92 negora 2014-08-05 07:39:24 UTC
I've been suffering from tearing in KDE for many versions now. I used to have an nVidia GeForce 9600GT and now have an nVidia GeForce 210. I'm running Kubuntu 14.04 (KDE 4.13.2, KWin 4.11.10). Every time my computer starts or restarts, the tearing reappears and I have to change the OpenGL implementation to recover vertical synchronization, no matter which version I set (1.2, 2.0 or 3.1). I'm using the official nVidia drivers, version 331.38 (without updates). Is this related to this bug? If it isn't, apologies.
Comment 93 retired 2014-08-05 09:45:12 UTC
(In reply to negora from comment #92)
> I've been suffering the tearing in KDE since many versions ago. I've had a
> nVidia GeForce 9600GT and now I've a nVidia GeForce 210. I'm running Kubuntu
> 14.04 (KDE 4.13.2, KWin 4.11.10). Everytime that my computer starts or
> re-starts, the tearing re-appears and I've to change the OpenGL
> implementation in order to recover the vertical synchronization, no matter
> which version I set (1.2, 2.0 or 3.1). I'm using the official nVidia
> drivers, version 331.38 (without updates). Is it related to this bug? If it
> isn't, apologizes.

It is related. __GL_YIELD="USLEEP" "fixes" it.

Create a file.sh with content similar to:

#!/bin/sh
export __GL_YIELD="USLEEP"
kwin --replace &

Then add this file to the KDE startup. My kwin is set to OGL 3.1 + Re-use screen content + Raster
Comment 94 negora 2014-08-06 09:20:42 UTC
Thank you Piotr Kloc. It was very helpful!

I hope that the KDE team is able to solve it definitively in a future release. Thank you for your hard work.
Comment 95 George Labuschagne 2014-08-17 09:15:35 UTC
Hi! (In reply to Piotr Kloc from comment #93)
> It is related. __GL_YIELD="USLEEP" "fixes" it.
> 
> Create file.sh with similar content:
> 
> #!/bin/sh
> export __GL_YIELD="USLEEP"
> kwin --replace &
> 
> Then add this file to KDE startup. My Kwin is set to OGL 3.1 + Reuse content
> + Raster

Thanks so much! I also had this exact same problem: terrible tearing after every reboot. The only fix was to manually set the renderer to OGL 2.0 and then back to OGL 3.1.

However, your script works perfectly and now I don't have any more tearing :)
Comment 96 Philip Sequeira 2014-08-17 16:27:04 UTC
There's no need to replace kwin if you set it early enough. If you put the script in ~/.kde/env instead of Autostart (or, from the GUI, "Pre-KDE startup" instead of "Startup"), it'll run before kwin starts, and all you need is the export.
Comment 97 Shmerl 2014-10-20 19:07:50 UTC
I'm hit by this bug as well.
System:

Debian testing
KDE 4.14.1
Kwin: 4.11.12-2+b1
Nvidia GeForce GT 620, driver 343.22

Is there a way to set the workaround for KWin only? Putting __GL_YIELD='USLEEP' in a script in $HOME/.kde/env sets it for all applications which run later, which may not be a good idea in some other cases.
Comment 98 Shmerl 2014-10-20 19:09:34 UTC
Also, for me this issue surfaces when I switch to a tty console from the main session or lock the screen (with the simple KDE locker). At all other times, CPU usage is normal.
Comment 99 Thomas Lübking 2014-10-20 19:21:19 UTC
You hit it all the time; just on those occasions the sync "lasts" long enough (forever) to really show.

Check the kwin script from here:
https://github.com/luebking/KLItools

Put it somewhere early in $PATH so that it shadows the kwin binary (/usr/local/bin, or possibly ~/bin) - and ensure it's executable.
Comment 100 Thomas Lübking 2014-11-22 00:27:58 UTC
*** Bug 341166 has been marked as a duplicate of this bug. ***
Comment 101 Saman 2014-12-13 09:19:38 UTC
(In reply to Thomas Lübking from comment #99)
> You hit it all the time, just in those occasions the sync "lasts" long
> enough (forever) to really show off.
> 
> Check the kwin script from here:
> https://github.com/luebking/KLItools
> 
> Put it somewhere up in $PATH so that it shadows the kwin binary
> (/usr/local/bin, evtl. ~/bin) - ensure it's executable.

I'm using Fedora 21 64-bit. My KDE version is 4.14.3. When I use that script, my menu bar disappears. I've come up with this simple script:

$cat kwin
#!/bin/sh

# Put this script in the PATH before the actual kwin. First set the actual path of kwin in the kwinPath variable

kwinPath=/bin

export __GL_YIELD="USLEEP"
${kwinPath}/kwin "$@"

I've named it kwin to shadow the actual kwin. In my case I put it in /usr/lib64/ccache because that path comes before /bin in my PATH variable. So I don't need to replace kwin or set USLEEP for all programs.
Comment 102 Thomas Lübking 2014-12-13 14:32:48 UTC
(In reply to Saman from comment #101)

> I'm using Fedora 21 64 bit. My KDE version is 4.14.3. When I use that script
> my Menu bar disaapeared. I've come up with this simple script:

What do you mean by "menu bar disappeared"? plasma-desktop crashed?
That'd be coincidental at best.
The only other thing that script effectively does is run nvidia-settings to check for and, if needed, get rid of FXAA.
Comment 103 Saman 2014-12-15 19:25:36 UTC
(In reply to Thomas Lübking from comment #102)
> (In reply to Saman from comment #101)
> 
> > I'm using Fedora 21 64 bit. My KDE version is 4.14.3. When I use that script
> > my Menu bar disaapeared. I've come up with this simple script:
> 
> What do you mean by "menubar disappeared"? plasma-desktop crashed?
> That'd be coincidental at best.
> The only other thing that script effectively does is to run nvidia-settings
> to check for and get rid of FXAA in case.

Sorry for replying so late; I've been too busy. Yes, I mean plasma-desktop. I've tested again and it works like a charm! I don't know whether that bug was random or related to Fedora 21, because today I updated my distro and tested it again. By the way, I didn't know about kwin_gles. It's amazing. Thank you for your great work.
Comment 104 jeremy9856 2015-06-08 15:22:29 UTC
Is "__GL_YIELD="USLEEP" still needed on plasma 5 with nvidia ?
Comment 105 Thomas Lübking 2015-06-08 18:25:35 UTC
(In reply to jeremy9856 from comment #104)
> Is "__GL_YIELD="USLEEP" still needed on plasma 5 with nvidia ?

Unless you're using triple buffering: yes.
Nothing has changed about the situation.
Comment 106 Michael Mikowski 2015-06-28 05:31:25 UTC
Autostart scripts work differently in KDE 5 as detailed here: https://docs.google.com/spreadsheets/d/1kLIYKYRsan_nvqGSZF-xJNxMkivH7uNdd6F-xY0hAUM.  Check out the 15.04 tab, starting at row 101.
Comment 107 Dāvis 2015-07-08 23:20:04 UTC
I'm using the latest nvidia driver 352.21 and I've enabled the TripleBuffer option in xorg.conf

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 650 Ti"
    Option         "NoLogo"
    Option         "ModeValidation" "AllowNonEdidModes"
    Option         "AddARGBGLXVisuals"  "true"
    Option         "TripleBuffer" "true"
EndSection

But I still get this warning:
kwin_core:
 It seems you are using the nvidia driver without triple buffering
 You must export __GL_YIELD="USLEEP" to prevent large CPU overhead on synced swaps
 Preferably, enable the TripleBuffer Option in the xorg.conf Device
 For this reason, the tearing prevention has been disabled.
 See https://bugs.kde.org/show_bug.cgi?id=322060

I have the latest kwin from git master (f6458fa1e8e92fdf16a1acc961703d229894454c), but it was the same with previous stable versions.

Any idea?
Comment 108 Dāvis 2015-07-08 23:31:24 UTC
(In reply to Dāvis from comment #107)

from journal:
Extensions: shape: 0x "11"  composite: 0x "4"  render: 0x "b"  fixes: 0x "50"  randr: 0x "14"  sync: 0x "31"  damage: 0x  "11" 
kwin_core: screens:  2 desktops:  4
kwin_core: Initializing OpenGL compositing
kwin_core: Choosing GLXFBConfig 0x115 X visual 0x2b depth 24 RGBA 8:8:8:0 ZS 0:0
kwin_core: Initializing fences for synchronization with the X command stream
kwin_core: 0x20071: Buffer detailed info: Buffer object 1 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_DYNAMIC_DRAW) will use SYSTEM HEAP memory as the source for buffer object operations.
kwin_core: 0x20071: Buffer detailed info: Buffer object 1 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_DYNAMIC_DRAW) has been mapped WRITE_ONLY in SYSTEM HEAP memory (fast).
kwin_core: OpenGL 2 compositing successfully initialized
kwin_core: Vertical Refresh rate  75 Hz ( "primary screen" )
kwin_core: 0x20071: Buffer detailed info: Buffer object 2 (bound to GL_ELEMENT_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
kwin_core: Successfully loaded built-in effect:  "blur"
kwin_core: Activation: No client active, allowing
kwin_core: Successfully loaded built-in effect:  "contrast"
kwin_core: Session path: "/org/freedesktop/login1/session/c2"
kwin_core: 0x20071: Buffer detailed info: Buffer object 3 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
kwin_core: 0x20071: Buffer detailed info: Buffer object 7 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
kwin_core: Triple buffering detection: "NOT available"  - Mean block time: 7.79337 ms
Comment 109 Thomas Lübking 2015-07-09 04:47:36 UTC
See bug #343184 - triple buffering detection is unfortunately heuristic, and something™ during plasmashell startup (only then; restarting kwin during the session does not seem to expose it) causes blocking (or at least "slow") swaps :-(
Comment 110 retired 2015-07-09 10:16:46 UTC
There's a nice workaround that can substitute for vsync. It works quite nicely for me.
https://wiki.archlinux.org/index.php/NVIDIA#Avoid_tearing_with_GeForce_500.2F600.2F700.2F900_series_cards
It also allows for lower latency when windows are being moved.
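
(The linked wiki section enables the driver's own composition pipeline; a sketch of applying it at session start - the metamode string, including the mode name "nvidia-auto-select", is an assumption for a single-screen setup:)

nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"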
Comment 111 Martin Flöser 2017-07-28 16:16:00 UTC
*** Bug 382831 has been marked as a duplicate of this bug. ***
Comment 112 Unknown 2017-07-28 18:10:15 UTC
My bug (382831) was marked as a duplicate of this bug, so I guess I'll throw in my two cents here. I read through this bug -- a lot of it is pretty technical and over my head. My issue with KWin is screen tearing while using the Nvidia proprietary drivers:

BEHAVIOR
When I enable screen tearing prevention in the Compositor settings, click apply, then drag and move a window around, screen tearing seems fixed for a few seconds, but it returns after that. Changing the settings again produces the same result: tearing is fixed for a few seconds, then dragging a window across the screen shows that it has returned.

EXPECTED BEHAVIOR
When tearing prevention is applied (regardless of the setting or the driver used), the settings stick after you click APPLY or OK, and the tearing is gone.

ISSUE DISCOVERED WHILE USING
Kubuntu 17.04 x64
KDE Plasma 5.10.4 (via ppa:kubuntu-ppa/backports)
KDE Frameworks 5.36.0
Qt 5.7.1
Linux kernel 4.10.0-28-generic
Nvidia GTX 1080 graphics card
Nvidia 384.59 proprietary driver (via ppa:graphics-drivers/ppa)
Comment 113 Ryein Goddard 2017-09-08 02:03:00 UTC
This also affects me on Kubuntu 16.04 with the backports PPA and the Nvidia CUDA driver package compatible with CUDA 8.0.

Adding export __GL_YIELD="USLEEP" to /etc/profile.d/kwin.sh fixed the problem, although I am sure this is just a workaround.
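For reference, the whole snippet is tiny; a sketch, assuming /etc/profile.d is sourced early enough by your session (that depends on the distribution and display manager):

  # /etc/profile.d/kwin.sh
  # make libGL usleep() instead of busy-waiting while a synced swap blocks
  export __GL_YIELD="USLEEP"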
Comment 114 Mahendra Tallur 2017-12-21 10:27:20 UTC
Hi everyone! Sorry for reducing the signal-to-noise ratio here :-)

I would like to ask the developers this question:

I know you are, for reasons I understand and respect, against all kinds of "workarounds".

However... this Nvidia + kwin tearing story has been around for ages. I mean, 5+ years is absolutely huge when it comes to software development. Power users can look for the solution, but most average users won't and will be disappointed; I have witnessed this around me.

If we agree it's not a good thing to build a workaround for this specific use case into KWin, what could be done? Some kind of "ironic option" in KWin's settings, explicitly referring to the bug it works around (as I guess it's Nvidia's fault to some extent)? Or shouldn't distro providers at least do something about it, even if it's not built into KWin?

Sorry for my useless blabbering :-) and congrats again on everything you've been doing. I love KWin & the KDE ecosystem :-)
Comment 115 Mahendra Tallur 2017-12-21 10:39:07 UTC
BTW, after re-reading my post I felt it sounded a little strange, so I apologize if it came across as harsh, which was absolutely not what I meant.

My point was just this: in my experience (4 different PCs, about 6 different Nvidia GPUs), the "export __GL_YIELD=USLEEP" workaround has always been necessary to prevent tearing. All my friends using Nvidia + KDE had to apply it as well.

I know applying a workaround systematically is bad... but what other solution do we have? Is it up to the distros?

Best regards
Comment 116 Lastique 2017-12-21 15:22:20 UTC
"export __GL_YIELD=USLEEP" is not needed to prevent tearing if you're using triple buffering (i.e. have it enabled in xorg.conf and define KWIN_TRIPLE_BUFFER=1). However, that option can reduce CPU consumption somewhat, so it's probably beneficial anyway. The problem is that __GL_* variables are driver-specific and should preferably be defined somewhere global (e.g. /etc/environment) so that all applications are affected, not just kwin.
Comment 117 Martin Flöser 2017-12-21 15:52:51 UTC
Maintainer speaking:

We will not add any workarounds! There are various reasons for this:

* We lack developers with the expertise to understand the problem
* We lack developers with NVIDIA cards
* The last patch we did for an NVIDIA-specific issue caused severe problems which required an emergency release
* We have no chance to properly understand what's going on, as the NVIDIA driver is proprietary
* If the NVIDIA driver, as the only driver, needs such workarounds, NVIDIA should fix their driver or contribute patches

Last but not least: X11 is going to be feature-frozen after 5.12. We are almost in feature freeze for 5.12, and given that we now have the Christmas break, it's unlikely that any NVIDIA feature is going to land before then. I don't see where the devs would come from.

As NVIDIA does not support Wayland, the X11 feature freeze means that we won't add any NVIDIA-specific changes any more.

I'm sorry for any inconvenience this creates for you. If you have any complaints about it, please take them to NVIDIA: ask them to fix their driver, release it as open source, or do anything that would allow us not to have to work around things.
Comment 118 Mahendra Tallur 2017-12-21 16:39:35 UTC
Hi Martin. Thanks a lot for the explanation. That's completely understandable. Thanks again so much for your work !
Comment 119 Thomas Lübking 2017-12-22 08:35:59 UTC
Given that, thanks to QtQuick, OpenGL is now everywhere, shipping a global environment snippet might be a good idea.
Otherwise, see the initial posts on the problems with setting up libGL from within kwin this way.

I'd have to check over Xmas whether the nvidia blob still exhibits this behavior.
Comment 120 Mahendra Tallur 2018-01-14 17:22:34 UTC
Hi again,

I would like to raise a different but likely connected issue:

On my setup, either setting USLEEP or enabling triple buffering fixes the tearing issue. With either, performance is wonderful in 3D games (GTX 1060 + i3 CPU).

However, what boggles me is that the responsiveness and smoothness of kwin itself (I mean the desktop: resizing windows, raising menus, etc.) is inconsistent. The same animation can be butter smooth one time and jerky the next. It has actually always been like this, for as long as I can remember, when using kwin with Nvidia cards.

It seems slightly better with triple buffering, but I don't want to add input lag in games...

I kind of assumed it was standard behaviour, but I noticed that even old machines with Intel HD Graphics show a perfectly constant 60 FPS in kwin. I tried removing my Nvidia card and using the built-in Intel HD, and the desktop felt perfect. It even seemed smoother with the Nouveau driver.

I'm pretty sure all of this is also related to the difficulty of working with the proprietary driver. Since you devs are aware of the vsync issue, what's your opinion on the "desktop smoothness" issue?

Thanks again :-)
Best regards & happy new year :)
Comment 121 Martin Flöser 2018-01-14 19:44:12 UTC
> As you devs are aware of the vsync issue, what's your
> opinion on the "desktop smoothness" issue ?
> 
Honestly, I stopped caring about nvidia-specific problems years ago. To me, Nvidia stopped mattering the day I switched my developer systems to Wayland. They run Intel, as NVIDIA doesn't support gbm. Because of that, even if I wanted to, I would not be able to install an Nvidia card and test issues.

Nowadays my opinion is that it is the consumers' choice whether they want nvidia, with all those problems, or not. They as consumers can get nvidia to fix issues; we devs cannot. Every crash in the Nvidia driver gets closed with "please report to Nvidia". If all users do, it might change something.
Comment 122 Thomas Lübking 2018-01-14 19:49:36 UTC
Comment #120 sounds more like a client-related issue anyway - resizing (QtQuick) GL contexts is a PITA, at least on nvidia, but OpenGL wasn't designed for this behavior anyway.
Comment 123 Ryein Goddard 2018-01-14 20:10:00 UTC
It would be cool if we could dynamically load the nvidia binary driver (like nvidia prime, but switching between the open and closed drivers quickly) whenever we want, so we could have the best of both worlds and not worry about any of these things. Screen tearing is probably the one thing I hate about KDE/Plasma right now, compared with other solutions that for whatever reason do not have this issue.
Comment 124 Thomas Lübking 2018-01-14 20:12:07 UTC
Impossible. nvidia and nouveau are incompatible at the kernel layer and act on the same HW. It's nowhere near the Optimus situation.
Comment 125 Mahendra Tallur 2018-02-01 09:52:44 UTC
@Martin Flöser : thanks, again for your replies, your work, as well as your blog posts and technical decisions. This is most appreciated.

I will clearly stop bothering you about Nvidia. 

As a consumer, can anyone here tell me what the state of the discussion with Nvidia is, and what they are aware of? Did some users here get in touch with them through their forums or bug tracker, if any? Any link so I could add my data to existing reports?

I cannot believe they are not at least a little concerned. It's crazy that, for instance, opening any app while a video is playing underneath makes the video stutter, which doesn't happen with Intel HD Graphics. Or that desktop effects randomly stutter, etc.
Comment 126 Twisted Lucidity 2018-02-01 10:29:24 UTC
(In reply to Mahendra Tallur from comment #125)
> I cannot believe [nvidia] are not at least a little concerned.
I can well believe that nvidia are unconcerned; desktop GNU/Linux is such a small share of their market that there probably isn't enough profit in making things work, and culturally nvidia appear hostile to F/OSS.

The correct "fix" is probably to switch to AMD/Intel, which will basically be mandated anyway as more distros switch to Wayland. Which kinda sucks for those of us on nvidia, but life happens.

At the moment I find that keeping desktop effects enabled & using '__GL_YIELD="USLEEP"' makes the problems mostly go away.
Comment 127 Ryein Goddard 2018-02-01 15:00:33 UTC
Nvidia may not care specifically about KDE/Plasma to the extent you want, but saying it doesn't care is clearly wrong. They do care about Linux. In fact, Nvidia knows its cards need to work well on Linux for ML and various other tasks. I have over 100 Steam games that work well on Linux, and Nvidia has made the needed fixes at times.

When I launch a game, Plasma freezes for a moment and then all advanced graphics appear frozen. My wallpaper even reverts to an image from some previous date and time. I'm not saying this is for sure the KDE Community's fault. I am just saying we should clean up our own house before casting shade on another's.

I understand people getting upset by things not working, but the way you get things done is by providing paths for people to get involved and help. If they disagree, you don't just put your head in the sand or give them the finger. What you should do is talk about it, and if Nvidia truly does something messed up, let the community know - but be careful, because the Linux zealots will be overly obtuse.

I am all for GNU/Linux, but I think we should all be pragmatic while sticking to our guns.
Comment 128 Thomas Lübking 2018-02-01 15:16:00 UTC
> Plasma freezes for a moment and then all advanced graphics appear frozen.
Sounds just as if steam blocks/disables the compositor?

> My wallpaper even reverts back to some previous datetime's image.
That I cannot explain at all.

> when opening any app while a video is playing underneath makes the video stutter
This is either caused by the opening animation and the video running at different FPS, or (rather?) it is I/O related (in which case different "apps" will have a different impact)

> which doesn't open with an Intel HD Graphics
Try enforcing the full composition pipeline (in nvidia-settings; it will draw more power)
Comment 129 Ryein Goddard 2018-02-01 15:22:35 UTC
I don't think that is the case, because it also happens when using CUDA outside of games, in some projects I work on.
Comment 130 Nikos Chantziaras 2018-02-01 15:28:20 UTC
(In reply to Thomas Lübking from comment #128)
> > My wallpaper even reverts back to some previous datetime's image.
> That I cannot explain at all.

This is reproducible by simply suspending compositing. After an undetermined amount of time, the non-composited desktop freezes its appearance and no longer gets updated. New applications don't appear on the task manager bar, the system tray doesn't get updated; basically it's frozen at some state in the past.

Resuming compositing again makes it work correctly. Suspending compositing after that makes it revert to the same old frozen state. For example, if it got frozen and the time on the systray says "20:00", but the time now is 21:00, then suspending and resuming compositing makes it display 20:00 - 21:00 - 20:00 - 21:00, etc, as you suspend/resume compositing.
Comment 131 Ryein Goddard 2018-02-01 15:47:27 UTC
That sounds like exactly what I experience. I guess the Nvidia driver is doing this?
Comment 132 Thomas Lübking 2018-02-01 15:57:10 UTC
> and the time on the systray says "20:00", but the time now is 21:00
That's not what you said :-P

https://bugs.kde.org/show_bug.cgi?id=353983
Comment 133 Ryein Goddard 2018-02-01 16:01:26 UTC
huh?
Comment 134 Nikos Chantziaras 2018-02-01 16:02:52 UTC
(In reply to Thomas Lübking from comment #132)
> > and the time on the systray says "20:00", but the time now is 21:00
> That's not what you said :-P
> 
> https://bugs.kde.org/show_bug.cgi?id=353983

I don't see any post from me on that bug :-P
Comment 135 Mahendra Tallur 2018-02-01 16:09:52 UTC
Regarding Force(Full)CompositionPipeline versus setting __GL_YIELD to USLEEP:
I used to use the latter and not the former, because forcing the composition pipeline may slow down the system, induce more power consumption, etc. - or so I thought.

With either solution, tearing is fixed. When using USLEEP I felt there was a performance impact on the desktop: for instance, it's obvious when moving a window to the edge of the screen to maximize it; that animation is jerky when using USLEEP with an nvidia card (but butter smooth with my Intel HD Graphics).

I figured out this animation is also smooth when using Force(Full)CompositionPipeline. I'm not 100% sure about the other effects, but it seems better. General desktop usage still seems slightly less smooth than with the open source drivers, but it's better than with USLEEP.

As for whether to force the FULL composition pipeline or not, I don't know what actual difference it makes.

(sorry for adding more noise :-)
Comment 136 Thomas Lübking 2018-02-01 16:20:05 UTC
The full composition pipeline makes the driver render indirectly. It's implicit when e.g. xrandr scales the output.

As for the "not what you said" comment: I focused on the "old wallpaper" part; I didn't read the description as "all of plasmashell freezes to an old buffer".
Comment 137 Mahendra Tallur 2018-02-01 16:27:28 UTC
I second Ryein & Nikos regarding the previous-buffer issue. Also, though it's off topic, I managed to kill kwin by switching back and forth between a game that interrupted compositing and the desktop (that's probably another story, and I'll check with the new Plasma).

Thanks for taking the time to reply, Thomas. 

I see another issue when using Force(Full)CompositionPipeline: about one time out of two, I get a black desktop (no plasmoids / background) until the next reboot. I still get the bottom panel, and everything else works, though. Also, even though the value is set in xorg.conf, it adds a 15-second delay on a black screen (after X starts / before Plasma appears).
Comment 138 Mahendra Tallur 2018-02-01 16:29:46 UTC
Oops, I forgot to add this to my previous comment (about getting a black desktop in 50% of cases when forcing the pipeline):

the desktop is black, but when I move the mouse over it and right-click, the menu relates to the items that were supposed to be there; for instance, the properties of a specific desktop icon in a specific folder view...
Comment 139 Lastique 2018-02-01 16:55:08 UTC
(In reply to Mahendra Tallur from comment #137)
> 
> I see another issue when using Force(Full)CompositionPipeline : about one
> time out of two, I get a black desktop (no plasmoid / background) until the
> next reboot.

This may be a manifestation of the infamous "black textures" problem.

https://bugs.kde.org/show_bug.cgi?id=386752
https://devtalk.nvidia.com/default/topic/1026340/linux/black-or-incorrect-textures-in-kde
Comment 140 Lastique 2018-02-01 17:01:43 UTC
BTW, in that Nvidia thread, a driver developer says:

> The claim that __GL_YIELD=usleep is required points at an application bug, possibly a race condition due to missing synchronization.

So if anyone is fixing tearing with that option (i.e. tearing is present without it and not present with it), then, according to Nvidia, the bug is in KWin.

On my system though, __GL_YIELD=usleep has no influence on tearing, only on CPU consumption.
Comment 141 Thomas Lübking 2018-02-01 17:11:27 UTC
As pointed out in the original report, this has *never* been about vsync functionality per se.

The other yield methods (as of that time; I have since dropped KDE for other reasons) caused ridiculous CPU load when glSwapBuffers blocks, i.e. with double-buffered vsync.
So the code just checks whether the environment variable is set, and disables vsync if it isn't and triple buffering is guessed to be off (there is/was no way to query it).

Thus it would be good if one could ensure kwin is loaded w/ this setting, but setting the environment inside the process is too late.
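In shell-style pseudocode, the check amounts to something like this (illustrative only, not the actual kwin source; triple_buffering_detected stands for the timing heuristic seen in the logs above):

  # sketch of kwin's startup decision on the nvidia blob
  if [ "$__GL_YIELD" != "USLEEP" ] && ! triple_buffering_detected; then
      disable_vsync   # a blocking glSwapBuffers would otherwise busy-wait
  fi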
Comment 142 Mahendra Tallur 2018-02-03 21:03:04 UTC
As suggested, I finally created a thread on the NVIDIA forum where an NVIDIA dev frequently replies... a message in a bottle!
Comment 143 Mahendra Tallur 2018-02-03 21:03:22 UTC
As suggested, I finally created a thread on the NVIDIA forum where an NVIDIA dev frequently replies... a message in a bottle!

https://devtalk.nvidia.com/default/topic/1029568/linux/the-situation-on-kde-kwin-plasma-performance/
Comment 144 Nikos Chantziaras 2018-02-04 14:15:11 UTC
(In reply to Mahendra Tallur from comment #143)
> As suggested, I finally created a thread on the NVIDIA forum where an NVIDIA
> dev frequently replies... A message in a bottle !
> 
> https://devtalk.nvidia.com/default/topic/1029568/linux/the-situation-on-kde-
> kwin-plasma-performance/

That link doesn't seem to work. I can't see any such thread there.
Comment 145 Mahendra Tallur 2018-02-04 14:38:18 UTC
It does exist; for some reason it seems to be hidden. Maybe NVIDIA has to approve it first. Sorry for the inconvenience. It seems many other users have the same issue on that forum.
Comment 146 Mahendra Tallur 2018-02-08 21:03:11 UTC
@Nikos : the link is available now : https://devtalk.nvidia.com/default/topic/1029568/linux/the-situation-on-kde-kwin-plasma-performance/
Comment 147 Peter Eszlari 2018-05-12 19:08:02 UTC
What I don't get about this long-standing bug is that under GNOME everything works out of the box - no tearing, and without having to mess with any configuration. This suggests to me that the bug is somewhere in kwin.
Comment 148 Martin Flöser 2018-05-13 15:58:56 UTC
(In reply to Peter Eszlari from comment #147)
> What I don't get about this long-standing bug is, that under GNOME
> everything works out of the box - no tearing

Feel free to use GNOME if it gives you the better experience. Unfortunately, it is not possible to draw any conclusions from the fact that it works for you on GNOME.
Comment 149 Thomas Lübking 2018-05-13 18:08:41 UTC
Or maybe just read https://bugs.kde.org/show_bug.cgi?id=322060#c141 about the nature of the "bug" and the state of the resolution ...
Comment 150 Mahendra Tallur 2018-05-15 09:42:57 UTC
[OT: to users]

Hi! I'm sorry for adding noise; I would just like to offer a piece of advice to *users* like me, as this is a longstanding problem affecting many people.

1) There are technical considerations that we cannot grasp; numerous efforts were made in the past, and the very nature of the Nvidia drivers makes this problem difficult to solve. Workarounds were attempted with no satisfying result.

2) You can get to a semi-acceptable state by applying workarounds (disabling automatic compositing interruption; enabling triple buffering), but it's never that great.

3) It's not that great under GNOME either. You still get sub-optimal performance on the desktop (I tried and compared GNOME Shell performance against an Intel HD); you still have to apply tweaks for tearing in some apps (Totem and browsers, for instance). It's acceptable, but it's also a compromise...

4) Believe me, the difference in general usability is so huge that it's worth downgrading and switching to a different GPU vendor if you're not a big gamer. I settled on a very slow and cheap AMD RX 550. Gaming is OK, but desktop performance is stellar (as it is with the Intel HD drivers). No more tearing, no more KDE panel crashes, no workarounds. You also benefit from open source drivers, a Wayland session, Lakka now working, eventually the realtime kwin ;-), no tweaks, a constant 60 FPS desktop...
Comment 151 Peter Eszlari 2018-05-15 10:55:32 UTC
(In reply to Martin Flöser from comment #148)
> Feel free to use GNOME if it gives the better experience for you.
> Unfortunately it is not possible to draw any conclusions from the fact that
> it works for you on GNOME.

But then I would have to deal with a crippled desktop environment. What I will do instead is buy an AMD card. I just don't think such an out-of-the-box experience for owners of Nvidia cards is good for KDE.
Comment 152 Martin Flöser 2018-05-15 15:44:47 UTC
(In reply to Peter Eszlari from comment #151)
> (In reply to Martin Flöser from comment #148)
> > Feel free to use GNOME if it gives the better experience for you.
> > Unfortunately it is not possible to draw any conclusions from the fact that
> > it works for you on GNOME.
> 
> But then I would have to deal with a crippled desktop environment. What I
> will do instead is buy an AMD card. I just don't think such an
> out-of-the-box experience for owners of Nvidia cards is good for KDE.

Please tell Nvidia that you are buying a card from a different vendor because their driver sucks. Nvidia needs to know that this is harming their business. We cannot, and do not want to, fix Nvidia's driver issues.
Comment 153 Peter Eszlari 2018-05-15 18:14:00 UTC
(In reply to Martin Flöser from comment #152)
> Nvidia needs to know that it is harming their business.

I think they won't care much, because the Linux customers that Nvidia cares about are running Red Hat Enterprise Linux with GNOME.
Comment 154 Peter Eszlari 2019-03-25 18:43:10 UTC
I guess this bug can be closed now:

https://phabricator.kde.org/D19867
Comment 155 Martin Flöser 2019-03-25 20:28:25 UTC
Marking as fixed per latest comment
Comment 156 Nikos Chantziaras 2019-05-21 19:02:06 UTC
For anyone coming here through a web search, there's a KWin fork that fixed all issues for me:

https://github.com/tildearrow/kwin-lowlatency

No more frame skipping or stutter, no more lag, works great with modern, better-than-60Hz displays.
Comment 157 Greg C 2019-06-03 15:10:14 UTC
(In reply to Peter Eszlari from comment #154)
> I guess this bug can be closed now:
> 
> https://phabricator.kde.org/D19867

Good day, how can I apply this resolution? What is the resolution?
Comment 158 Stephan Karacson 2019-06-03 19:48:40 UTC
I believe it now works quite well without TripleBuffer on kwin 5.15.5 on Gentoo GNU/Linux - just to drop a version number here.