SUMMARY
My laptop is an Intel+NVIDIA Optimus one, with an HDMI port on the Intel GPU and a mini DisplayPort on the NVIDIA GPU. If the external monitor is connected to the NVIDIA miniDP port, both the internal and external monitors are detected correctly and both show the desktop, but the whole desktop renders at a very low frame rate and the kwin_wayland process consumes nearly 100% of a single CPU core. However, if the image on the external monitor stays static, kwin_wayland stays calm and the internal monitor is smooth as usual.

If I set the environment variable `KWIN_DRM_DEVICES=/dev/dri/card0` (the NVIDIA GPU, determined from /dev/dri/by-path and lspci), kwin can drive the external monitor connected to the NVIDIA miniDP port, but the internal monitor is not detected. Moreover, if I set `KWIN_DRM_DEVICES=/dev/dri/card0:/dev/dri/card1` (NVIDIA followed by Intel), the same symptoms occur as when KWIN_DRM_DEVICES is absent.

STEPS TO REPRODUCE
1. Boot into the KDE desktop without the external monitor
2. Connect the external monitor to the NVIDIA GPU
3. Log out and set KWIN_DRM_DEVICES=/dev/dri/card0 in /etc/environment (from a tty or ssh)
4. Log in again with the external monitor still on the NVIDIA GPU

OBSERVED RESULT
After step 2, both monitors work, but the desktop runs at a low frame rate and the kwin_wayland process shows high CPU usage; after step 4, the desktop on the external monitor works smoothly, but the internal monitor is not detected.

EXPECTED RESULT
No matter which GPU is the default, or which port external monitors connect to, the desktop should be smooth.

SOFTWARE/OS VERSIONS
Operating System: Arch Linux
KDE Plasma Version: 5.24.4
KDE Frameworks Version: 5.92.0
Qt Version: 5.15.3
Kernel Version: 5.17.1-zen1-1-zen (64-bit)
Graphics Platform: Wayland
Processors: 8 × Intel® Core™ i7-6700HQ CPU @ 2.60GHz
Memory: 15.5 GiB of RAM
Graphics Processor: NVIDIA GeForce GTX 965M/PCIe/SSE2
Graphics Processor: Intel HD Graphics 530
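For anyone trying to reproduce this, a minimal sketch of how the card-to-GPU mapping can be derived from /dev/dri/by-path, as the report describes. The symlink name and target below are illustrative samples, not values from the reporter's machine; card numbering is not stable across boots, so verify before writing KWIN_DRM_DEVICES.

```shell
#!/bin/sh
# On a real system, list the symlinks with:  ls -l /dev/dri/by-path/
# Each entry looks like "pci-0000:01:00.0-card -> ../card0".
# Extract the PCI address and the card node from one such entry:
link="pci-0000:01:00.0-card"   # sample symlink name (illustrative)
target="../card0"              # sample symlink target (illustrative)
pci_addr=${link#pci-}          # strip leading "pci-"
pci_addr=${pci_addr%-card}     # strip trailing "-card" -> 0000:01:00.0
card=${target##*/}             # basename of the target  -> card0
echo "$card -> $pci_addr"
# Then cross-reference the PCI address with lspci, e.g.:
#   lspci -s 0000:01:00.0
# to see whether that card is the Intel or the NVIDIA GPU.
```

The resulting node path (e.g. /dev/dri/card0) is what goes into KWIN_DRM_DEVICES.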
I can confirm this issue as well, with an MSI GS63VR 7RF (i7-7700HQ + Nvidia 1060), on NixOS unstable. Interestingly enough, in my case the internal screen stays smooth, and only the external monitor gets low performance and high CPU usage. I can confirm it quantitatively by running glxgears: if the window is on the internal screen, I get ~60 FPS (vsync'd); if I move the glxgears window to the external, Nvidia-connected screen, it drops to ~32-36 FPS, and even the cursor gets choppy. Also, sometimes when connecting the external monitor after Plasma startup something goes weird: the wallpaper doesn't show up, but I can move the mouse and windows to the external screen. My current workaround is a Thunderbolt 3 port with a Thunderbolt->HDMI adapter, which is detected as eDP and runs on the iGPU; glxgears reports 75 FPS (75 Hz monitor) and the desktop is smooth.
This also seems related to #450110 , although that mentions X11 only.
Git commit 68a54a67b88025c1c0679ffe1658222f61b0cc81 by Xaver Hugl. Committed on 26/04/2022 at 15:03. Pushed by zamundaaa into branch 'master'. backends/drm: enable format modifiers by default Format modifiers enable the graphics hardware to be much more efficient, especially when it comes to multi-gpu transfers. With the issues regarding bandwidth limits now solved, enable them by default to make all supported systems benefit from them. Related: bug 452397 M +1 -1 src/backends/drm/egl_gbm_layer_surface.cpp https://invent.kde.org/plasma/kwin/commit/68a54a67b88025c1c0679ffe1658222f61b0cc81
KWIN_DRM_USE_MODIFIERS gates the use of those modifiers, but does not seem to fix the problem. I am on tag v5.24.5 (via Fedora 36), and looking at https://invent.kde.org/plasma/kwin/-/blob/v5.24.5/src/backends/drm/egl_gbm_backend.cpp#L164 suggests that `export KWIN_DRM_USE_MODIFIERS=1` would force-enable the use of modifiers, even in the presence of an Nvidia GPU. I have set KWIN_DRM_USE_MODIFIERS=1 (in an .../env startup script) and do not see any change in behaviour in my scenario.
To clarify - I am trying to say that KWIN_DRM_USE_MODIFIERS (on 5.24.5) would have the same effect as the commit. Because KWIN_DRM_USE_MODIFIERS has no effect, I am concerned that the commit might not fix the real problem.
Created attachment 149489 [details] Screenshot of massive CPU load
Let me provide some details, observations, and confirmations which may be helpful (based on v5.24.5 / Fedora 36):

* Wayland
* Notebook with Optimus setup - Intel iGPU + Nvidia dGPU
* Intel GPU has (exclusive) control over the internal display and the HDMI port ("connector")
* Nvidia GPU has (exclusive) control over the USB-C output path (i.e. DisplayPort Alternate Mode, Thunderbolt Alternate Mode) ("connector")
* Intel is the default (boot) GPU (and cannot be changed)

Log into an empty desktop and attach a 4K external screen to USB-C (which ends up as a DisplayPort bitstream). The Nvidia GPU remains in "PRIME offload" mode, but obviously needs the data for presenting. Any rendering of data strictly towards the Intel-controlled connector remains smooth. Any rendering towards the Nvidia-controlled connector results in

* an extremely jerky mouse cursor; everything is laggy
* massive CPU load from the kwin_wayland process in response to anything that requires drawing, even just moving the mouse cursor around

It *feels* almost as if the complete content associated with the 4K screen is transferred, just for moving the mouse cursor around. Running 'perf trace' suggests that an enormous amount of time is spent on memmove / memcpy (via glibc optimized functions); the attached screenshot shows that (and right now I do not know how to produce better diagnostic data).

I tried installing debug symbols via the Fedora service; this got lines resolved in glibc (the memcpy), but nothing for the KDE process (qt_metacast smells like dynamic dispatch, so I am not all that surprised). I turned on DRM logs (https://invent.kde.org/plasma/kwin/-/wikis/Debugging-DRM-issues); there was surprisingly little data, but, not being familiar with the domain, I do not know what to look for in those logs.
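For reference, a config sketch of the logging knobs mentioned above. Assumptions: KWin uses standard Qt logging categories (exact category names may vary between Plasma versions), and the kernel-side DRM verbosity is the drm module's debug bitmask; see the linked KWin wiki page for the authoritative steps.

```shell
# In the session environment (e.g. /etc/environment or a kwin env script),
# enable verbose output for all KWin logging categories:
QT_LOGGING_RULES="kwin_*.debug=true"

# Kernel side, as root: 0x1f enables the core/driver/kms/prime/atomic
# message classes of the DRM subsystem (bitmask values from the kernel's
# drm_print documentation):
#   echo 0x1f > /sys/module/drm/parameters/debug
# The messages then appear in dmesg / the journal.
```

Note that kernel DRM logging at this level is very chatty; turn it back off (echo 0) after capturing a reproduction.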
I can also confirm the observation that staying on the Intel GPU creates a smooth experience: Disconnect the external screen from DisplayPort, attach it to HDMI (Intel connector) - everything is smooth. In closing, this is the Nvidia 510 driver, the most current driver available for Fedora 36 from rpmfusion at the time of this writing.
https://bugs.kde.org/show_bug.cgi?id=409040 smells as if it may apply here, too (and the attached MR https://invent.kde.org/plasma/kwin/-/merge_requests/1861 is still open, but a comment there suggests that the Nvidia 515 driver might improve matters?). If timestamps are not accurate, and kwin then does too much work, this might end up consuming far too much CPU. Someone with domain expertise will probably shoot this down quickly :)
No (observable) change in behaviour for the current Nvidia 515 driver.
Created attachment 150302 [details] My KDE log (default setting + KWIN_DRM_DEVICES to nvidia)

My PC has a similar setup to a typical I+N laptop: I set the Intel card as the primary GPU (connected to one monitor) and use the NVIDIA card to connect a second monitor. I observe the very same thing. Basically:
1. By default, kwin_wayland uses the Intel card. Both monitors show a picture, but the "NVIDIA screen" runs terribly slowly.
2. If I set `KWIN_DRM_DEVICES=/dev/dri/card0` (the NVIDIA card), or set the NVIDIA card as the primary GPU, then only the NVIDIA screen shows a picture.

I attached the relevant log. The driver is NVIDIA-open 515.48.07, and Plasma is 5.25.2.
Created attachment 151088 [details] Sample video of kwin framerate issue

I'm not sure if I should be filing this under a new bug, but I'm seeing the exact same behaviour with an AMD + AMD laptop, with two notable differences:
1. This only seems to happen under X11, and is not present under Wayland, and
2. This seems to only occur (for me) when using a larger-resolution (2K or 4K) screen, no matter what resolution it's actually set to.

See the attached video, where the left side is the external display connected to HDMI (which is wired directly to the dGPU), and the right side is the internal display (wired to the iGPU). GPU mode is set to 'hybrid', where it prefers the integrated chip.

Other notes:
- If something is running using DRI_PRIME (e.g. a game), it seems to be unaffected.
- Some applications are affected and others are not (bad ones include plasmashell and Steam; good ones are Dolphin and Firefox).
- Manipulation of windows (such as dragging them around) seems completely unaffected.

Software:
- Fedora 36
- Kernel 5.18.13-200.fc36.x86_64
- kwin/KDE 5.25.3 (rel 3.fc36)
- libdrm 2.4.110
- mesa-dri-drivers 22.1.4
I'm seeing this problem since an update yesterday (openSUSE RPMs), which makes me wonder why I haven't been bitten by it before. Perhaps a default setting was changed. System setup is a laptop (2021 HP ZBook Fury) with Intel+Nvidia graphics in "hybrid" mode (BIOS setting). I'm using Intel as the primary, so the external screens connected via a Thunderbolt dock go via the Nvidia path. I have Plasma 5.25.4 and Nvidia G06 515.65. I have the same symptoms as others. The kwin_wayland process is now always using the CPU. Just moving the mouse around causes a large increase in CPU usage. Screen updates are very slow, and to my eye it looks like single digit FPS. I note that glxgears, while slower on the Nvidia screen, still reports a fairly high FPS. I suspect that the problem lies solely with the transfer between the graphics cards, and that the application doesn't see this. It reports around 600FPS in glxgears, regardless of whether I use DRI_PRIME=1 or not, but as mentioned above, it does not look like that. Both variations use a similarly high amount of CPU. Possibly related: with this last update, auto-blanking (after a timeout) has stopped working, as has the wobbly windows effect.
Minor update: I changed the BIOS setting to "discrete", i.e. use Nvidia only and ignore the Intel graphics card. Now the lag has gone, the kwin_wayland CPU usage is down to a negligible amount, and the mouse is smooth across all screens. Glxgears runs at around 60 FPS (the screen refresh rate), and my windows are wobbly again as they should be. As this was working (for me) until yesterday, I suspect we have a regression in the handling of the passthrough to the second graphics card.
*** Bug 459692 has been marked as a duplicate of this bug. ***
Tested on plasma 5.26 + kernel 6.0 + nvidia-open 520.56.06. Still saw the same issue. Is this related to DMABUF support on NVIDIA card? https://github.com/NVIDIA/open-gpu-kernel-modules/discussions/243
A possibly relevant merge request was started @ https://invent.kde.org/plasma/kwin/-/merge_requests/3254
*** Bug 461172 has been marked as a duplicate of this bug. ***
*** Bug 462759 has been marked as a duplicate of this bug. ***
*** Bug 457735 has been marked as a duplicate of this bug. ***
I ran into these issues (low fps on external monitor and high CPU usage when moving mouse under wayland) on my Nvidia laptop (fedora 37, KDE 5.26, Nvidia drivers v525.78, linux 6.1, i5-7300hq + GTX 1060) and after git-bisecting through different commits I landed on the aforementioned 68a54a67 "backends/drm: enable format modifiers by default" commit. When I added "KWIN_DRM_USE_MODIFIERS=0" to my /etc/environment file the lag actually did go away. (Although kwin_wayland's CPU usage does still increase up to 40% when I move my mouse on the external monitor).
(In reply to Mathias Johansson from comment #20) > I ran into these issues (low fps on external monitor and high CPU usage when > moving mouse under wayland) on my Nvidia laptop (fedora 37, KDE 5.26, Nvidia > drivers v525.78, linux 6.1, i5-7300hq + GTX 1060) and after git-bisecting > through different commits I landed on the aforementioned 68a54a67 > "backends/drm: enable format modifiers by default" commit. When I added > "KWIN_DRM_USE_MODIFIERS=0" to my /etc/environment file the lag actually did > go away. (Although kwin_wayland's CPU usage does still increase up to 40% > when I move my mouse on the external monitor). I tried this on my laptop -- Arch, latest everything, 8 i9's plus NVIDIA 3060. No change at all.
*** Bug 409040 has been marked as a duplicate of this bug. ***
*** Bug 467318 has been marked as a duplicate of this bug. ***
I am experiencing this on AMD (i915 + amdgpu) as an eGPU. The display connected to it is running at half the refresh rate even though it is set to 60 in the settings. When I run glxgears and force vsync off, it somehow gets unlocked: glxgears reports a much higher fps, but the whole screen still feels like 30 Hz. The issue persists when using GNOME, so it is Wayland-only for me regardless of which compositor or DE I am using. It most probably is something in the GBM backend then, since GNOME and Plasma use the same thing.
IIUC, NVIDIA have posted MRs on swaywm and wlroots which touch the rough area of the specific problem reported here: * https://github.com/swaywm/sway/pull/7509 * https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4055 My hardware matches almost exactly what the MR description says: "On most laptops the dGPU does not drive the integrated display, but drives external displays through the HDMI port on the sides/back of the laptop. Plugging in an external display and fullscreening an application on it is what this MR helps." Well, for me it's the USB-C out == Thunderbolt == effectively DisplayPort-alternate where the dGPU lives its life. Perhaps those MR provide some inspiration for kwin.
KWin already supports what these MRs implement since 5.22. Once the NVidia driver supports dmabuf feedback and direct scanout, it will also work on KWin. It'll only help with fullscreen though.
*** Bug 467815 has been marked as a duplicate of this bug. ***
Git commit b14f7959eb5f4d2b690ac26fdfee76abc837240c by Xaver Hugl. Committed on 12/04/2023 at 13:28. Pushed by zamundaaa into branch 'master'. backends/drm: add another multi gpu fallback With the dmabuf multi-gpu path, a buffer is imported to the secondary GPU and presented directly, but importing a buffer that's usable for scanout is not possible that way on most hardware. To prevent CPU copy from being needed in those cases, this commit introduces a fallback where the buffer is imported for rendering only, and then copied to a local buffer that's presented on the screen. Related: bug 465809 M +73 -43 src/backends/drm/drm_egl_backend.cpp M +5 -3 src/backends/drm/drm_egl_backend.h M +92 -15 src/backends/drm/drm_egl_layer_surface.cpp M +7 -3 src/backends/drm/drm_egl_layer_surface.h M +5 -0 src/composite.cpp M +1 -2 src/libkwineffects/kwinglutils.h M +18 -0 src/platformsupport/scenes/opengl/eglcontext.cpp M +7 -3 src/platformsupport/scenes/opengl/eglcontext.h https://invent.kde.org/plasma/kwin/commit/b14f7959eb5f4d2b690ac26fdfee76abc837240c
This commit should fix it for Intel/AMD+Intel/AMD systems, and might also work with nouveau. Support for the proprietary NVidia driver is being worked on
(In reply to Zamundaaa from comment #29) > This commit should fix it for Intel/AMD+Intel/AMD systems, and might also > work with nouveau. Support for the proprietary NVidia driver is being worked > on I can confirm that it fixed my external display connected to Razer Core X with AMD Radeon RX 6600 XT. Running smoothly at 60fps. Tested with glxgears on wayland.
*** Bug 468583 has been marked as a duplicate of this bug. ***
A possibly relevant merge request was started @ https://invent.kde.org/plasma/kwin/-/merge_requests/4177
Git commit d8e57f78863b76ed5945e7216d6dbe19c9e14cc8 by Xaver Hugl. Committed on 20/06/2023 at 07:59. Pushed by zamundaaa into branch 'master'. backends/drm: improve multi gpu performance with NVidia as secondary GPU With the Nvidia driver, linear textures are external_only, so additional measures need to be taken to make the egl import path work M +12 -0 src/backends/drm/drm_egl_backend.cpp M +3 -0 src/backends/drm/drm_egl_backend.h M +17 -6 src/backends/drm/drm_egl_layer_surface.cpp M +8 -7 src/libkwineffects/kwineglimagetexture.cpp M +2 -2 src/libkwineffects/kwineglimagetexture.h M +5 -5 src/libkwineffects/kwingltexture.cpp M +1 -1 src/libkwineffects/kwingltexture.h M +17 -14 src/libkwineffects/kwinglutils.cpp M +1 -0 src/libkwineffects/kwinglutils.h M +1 -0 src/platformsupport/scenes/opengl/CMakeLists.txt M +10 -7 src/platformsupport/scenes/opengl/eglcontext.cpp M +1 -1 src/platformsupport/scenes/opengl/eglcontext.h M +25 -6 src/platformsupport/scenes/opengl/egldisplay.cpp M +13 -1 src/platformsupport/scenes/opengl/egldisplay.h A +66 -0 src/platformsupport/scenes/opengl/eglnativefence.cpp [License: GPL(v2.0+)] A +38 -0 src/platformsupport/scenes/opengl/eglnativefence.h [License: GPL(v2.0+)] M +0 -1 src/plugins/screencast/CMakeLists.txt D +0 -50 src/plugins/screencast/eglnativefence.cpp D +0 -33 src/plugins/screencast/eglnativefence.h M +2 -2 src/plugins/screencast/screencaststream.cpp M +8 -0 src/utils/filedescriptor.cpp M +1 -0 src/utils/filedescriptor.h https://invent.kde.org/plasma/kwin/-/commit/d8e57f78863b76ed5945e7216d6dbe19c9e14cc8
This commit should improve performance, even if it's not ideal yet. Once NVidia implements EGL_ANDROID_native_fence_sync (which I'm told should happen this year) performance should be on par with other drivers
Will the aforementioned merge requests (assuming they are merged) be part of a 5.27.x patch release or will everybody have to wait for Plasma 6 to get these fixes?
They're all Plasma 6 only. Backporting them is technically possible, but the risk of regressions is too high for a bugfix release
*** Bug 473197 has been marked as a duplicate of this bug. ***
Using latest Neon Unstable, glxgears tops out at 157 fps, but usually it reports around 84-120 fps on average. I use two screens, 1080p@144Hz and 1440p@165Hz. There is a constant 50% load on a single CPU thread by kwin_wayland.

Operating System: KDE neon Unstable Edition
KDE Plasma Version: 5.27.80
KDE Frameworks Version: 5.240.0
Qt Version: 6.6.0
Kernel Version: 6.2.0-32-generic (64-bit)
Graphics Platform: Wayland
Processors: 16 × AMD Ryzen 7 4800H with Radeon Graphics
Memory: 62.2 GiB of RAM
Graphics Processor: AMD Radeon Graphics
Manufacturer: LENOVO
Product Name: 82B1
System Version: Lenovo Legion 5 15ARH05H
RTX 2060 Mobile
(In reply to petrk from comment #38) > Using latest Neon Unstable, glxgears tops at 157 fps, but usually it reports > around 84-120 fps average. > I use two screens, 1080p@144Hz and 1440p@165Hz > > There is a constant 50% load on a single CPU thread by kwin_wayland. Can confirm. Checking with hotspot with glxgears running on an external monitor, 70-80% of CPU cycles are spent in eglMakeCurrent in the NVidia driver when we stop using the NVidia context. This will need to be debugged by someone with access to the NVidia driver code
Does the problem go away if you set __GL_HWSTATE_PER_CTX=2?
No changes with that variable. I noticed something interesting: checking power draw via nvidia-smi, on an X11 session running the external screen I see power ramp to 20W+ while moving windows around, and around 4W at idle. On a Wayland session I can't get the Nvidia GPU to go above 5W; at idle it stays at 2W. Does that mean the Wayland session renders on the iGPU for all outputs by default? Or am I misunderstanding things from looking at a glance?
Does the problem go away with the new 545 beta driver? https://www.nvidia.com/download/driverResults.aspx/212964/en-us/
On Arch with 545.23.06 no changes, X11 is flawless while Wayland struggles. On Neon Unstable with 545.23.06 X11 is flawless as well, Wayland struggles, plus kscreen backend has issues loading in systemsettings and nvidia-settings segfaults.
*** Bug 476769 has been marked as a duplicate of this bug. ***
(In reply to Neal Gompa from comment #42) > Does the problem go away with the new 545 beta driver? > https://www.nvidia.com/download/driverResults.aspx/212964/en-us/ sadly not
*** Bug 477987 has been marked as a duplicate of this bug. ***
This problem seems to have gotten worse with the beta for me. Kwin consumes between 50% and 100% of one core, and I have some freezes.
Yes. That is an NVidia driver bug: it busy-waits on OpenGL context switches and uses more CPU that way than actually copying the buffer would take. I've been told it should be fixed in a driver release coming out in January or February
Can you try setting __GL_HWSTATE_PER_CTX=1?
(In reply to Arthur Huillet from comment #49) > Can you try setting __GL_HWSTATE_PER_CTX=1? Sorry, I notice I had asked a similar question back in October already.
Hello all, I wanted to share a small discovery related to this, somewhat related to a post I made a while back on reddit: https://www.reddit.com/r/kde/comments/11detap/psa_on_nvidia_optimushybrid_laptops_using/

I've found that if I disable my integrated graphics from boot with the following configuration under /etc/modprobe.d/10-blacklist-amdgpu.conf:

blacklist amdgpu

the performance on the external display is fine, but my laptop screen doesn't work; only the external monitor does. However, if I add the module **after** I get into Plasma with:

sudo modprobe -a amdgpu

I get video on both the external display and my laptop screen, and the frame drops are gone! The CPU utilization is still high, though.
NVidia driver 550 has been released as a beta and should fix this properly with Plasma 6.0
(In reply to Zamundaaa from comment #52)
> NVidia driver 550 has been released as a beta and should fix this properly
> with Plasma 6.0

Just tested the driver with Plasma 6 on Arch. While the performance did improve a bit, testing on testufo.com is still pretty bad. I noticed that placing the window across both screens (so it is shown on both the laptop screen and the external monitor) does make the framerate increase. I can make a video later to properly demonstrate.
With 6.0 RC2, performance is still not on par with X11; glxgears doesn't quite reach the screen refresh rate. It's around 150 fps on a 165 Hz screen; it may be closer than it was, but not there yet. Nvidia 550.40.07 on a 4060 Ti. Plus there appears to be a delay between typing and text appearing on screen, which may be unrelated.
NVIDIA 550, `OGL_DEDICATED_HW_STATE_PER_CONTEXT=ENABLE_ROBUST` envvar fixes the issue
(In reply to Dmitrii Chermnykh from comment #55) > NVIDIA 550, `OGL_DEDICATED_HW_STATE_PER_CONTEXT=ENABLE_ROBUST` envvar fixes > the issue I tried setting the envvar in /etc/environment to no effect. My Specs are: Asus Zephyrus G15 AMD Ryzen 9 5900HS with Radeon Graphics RTX 3080 Monitor plugged in the nvidia card outputs.
A possibly relevant merge request was started @ https://invent.kde.org/plasma/kwin/-/merge_requests/5115
Git commit 1c8bd1be626d2c2453f53e8c67d5f15489c75835 by Xaver Hugl. Committed on 05/02/2024 at 21:46. Pushed by zamundaaa into branch 'master'. backends/drm: use explicit sync where possible Instead of calling glFinish, which blocks until it's done and has high CPU usage on NVidia, use EGL_ANDROID_native_fence_fd to get an explicit sync fd, which the commit thread automatically waits on before committing the buffer to KMS. M +10 -7 src/backends/drm/drm_buffer.cpp M +1 -1 src/backends/drm/drm_buffer.h M +1 -1 src/backends/drm/drm_egl_layer.cpp M +17 -17 src/backends/drm/drm_egl_layer_surface.cpp M +3 -2 src/backends/drm/drm_egl_layer_surface.h M +2 -2 src/backends/drm/drm_gpu.cpp M +1 -1 src/backends/drm/drm_gpu.h M +3 -3 src/backends/drm/drm_qpainter_layer.cpp https://invent.kde.org/plasma/kwin/-/commit/1c8bd1be626d2c2453f53e8c67d5f15489c75835
A possibly relevant merge request was started @ https://invent.kde.org/plasma/kwin/-/merge_requests/5116
Git commit f94e4d16dda294fd11b7d68503daccc33ab3ff9b by Xaver Hugl. Committed on 05/02/2024 at 22:19. Pushed by zamundaaa into branch 'Plasma/6.0'. backends/drm: use explicit sync where possible Instead of calling glFinish, which blocks until it's done and has high CPU usage on NVidia, use EGL_ANDROID_native_fence_fd to get an explicit sync fd, which the commit thread automatically waits on before committing the buffer to KMS. (cherry picked from commit 1c8bd1be626d2c2453f53e8c67d5f15489c75835) M +10 -7 src/backends/drm/drm_buffer.cpp M +1 -1 src/backends/drm/drm_buffer.h M +1 -1 src/backends/drm/drm_egl_layer.cpp M +17 -17 src/backends/drm/drm_egl_layer_surface.cpp M +3 -2 src/backends/drm/drm_egl_layer_surface.h M +2 -2 src/backends/drm/drm_gpu.cpp M +1 -1 src/backends/drm/drm_gpu.h M +3 -3 src/backends/drm/drm_qpainter_layer.cpp https://invent.kde.org/plasma/kwin/-/commit/f94e4d16dda294fd11b7d68503daccc33ab3ff9b
*** Bug 481554 has been marked as a duplicate of this bug. ***
My laptop is also an Intel+NVIDIA one, which has an HDMI port. I installed the nvidia 550 driver and started experiencing lags like this on the external display on the X server. Strong FPS drops began in Wayland. On nouveau, Wayland works well; on the X server, artifacts appear as in the video. On the built-in display, the X server and Wayland work fine on any driver. On the external display it works fine only on Wayland and only on nouveau. I don't have such issues on GNOME, but I don't remember which driver I used. Video link https://drive.google.com/file/d/1HVsZkVo4xsVL_vm6AMmIhuif-lzvf205 (Speed 0.25). Plasma 6.0.0, neon, RTX 3060, i7-12700H, 4K external display. Laptop model is Aero 5 KE4.
This issue is about low fps on Wayland. Please file a new bug report
Created attachment 167220 [details] CPU Load increase depending on screen glxgears is displayed on
Created attachment 167221 [details] glxgears framerate dropping on external monitor connected to nvidia GPU
I'm still having issues. Latest Nvidia driver (550.54.14-5) KDE Plasma 6.0.2 Arch Linux 6.7.9-arch1-1 Wayland session AMD Ryzen 7 4800H + its iGPU running Kwin GTX 1660 Ti Laptop (Turing) Laptop screen connected to AMD iGPU and DisplayPort monitor running at 1080P 240Hz (connected to Nvidia GPU). Glxgears only runs at around 80 FPS. In attachments 167220 and 167221 you can see CPU load increasing and glxgears framerate dropping depending on which monitor it is being displayed on. Mouse cursor movement is also not 240 Hz, or at least there are issues with frame pacing depending on what is being done on the monitor, I haven't been able to narrow it down well.
Can anyone report that this issue is actually fixed? It is not on my end.
There are no traces of this issue being fixed; as such, I will reopen it.
System Configuration:
ASUS TUF A15 FA506II notebook
CPU: AMD Ryzen 7 4800H (has an iGPU; the laptop screen is connected to it)
dGPU: NVIDIA GTX 1650 Ti (using the proprietary 550.67 driver; has a USB-C output port)
144 Hz external monitor (connected via a DisplayPort-to-USB-C cable)
Distro: openSUSE Tumbleweed 20240321
DE: KDE Plasma 6.0.2 Wayland
Kernel: 6.8.1-1-default
kwin is running on the iGPU

Observations (all tests are done on the 144 Hz external monitor):
- Firefox on Wayland gets 144 fps on testufo.com, but it drops to 72 fps when I shake the mouse. Also, while testufo is running, the fps drops to 72 when I open the application launcher.
- Brave on Xwayland gets ~65 fps on testufo.com, and shaking the mouse has no effect.
- Brave on Wayland (--ozone-platform-hint=auto) gets ~76 fps on testufo.com, but it drops to 72 fps when I shake the mouse.
- glxgears gets 144 fps, and it drops to ~141 fps when shaking the mouse.
- When I activate the "Show FPS" desktop effect, the current fps is ~195, the maximum fps shows 143, and when I shake the mouse the current fps drops to ~160.

The Plasma 6 and NVIDIA 550 updates have improved performance a lot; now I can daily-drive Wayland on both my home and work laptops (at work I have a 75 Hz external monitor and it was getting ~37 fps, which was not usable). Thanks to all the developers for their efforts.
Definitely not fixed. HP Omen with Optimus Graphics: Device-1: Intel Alder Lake-P GT2 [Iris Xe Graphics] driver: i915 v: kernel Device-2: NVIDIA GA103M [GeForce RTX 3080 Ti Laptop GPU] driver: nvidia v: 550.67 I installed fresh Endeavouros with KDE Plasma 6 with 165Hz monitor. I can definitely feel lags and low FPS. testufo.com shows 35 FPS. I can log out and switch to X11 -> testufo.com shows 165Hz.
*** Bug 483038 has been marked as a duplicate of this bug. ***
*** Bug 485254 has been marked as a duplicate of this bug. ***
I think the recent influx of reports of this bug is simply caused by the lack of explicit sync (linux-drm-syncobj), for which there is an open merge request on the KWin side and on the Nvidia proprietary driver side. Is this correct?
Getting more than half the framerate using the following line in /etc/environment:

KWIN_DRM_DEVICES=/dev/dri/card1:/dev/dri/card0

Just thought I'd share for anyone going through the same thing