Bug 424408 - Multiple coredumps with every login/logout
Summary: Multiple coredumps with every login/logout
Status: RESOLVED FIXED
Alias: None
Product: plasmashell
Classification: Plasma
Component: general
Version: 5.19.3
Platform: openSUSE Linux
Priority: VHI  Severity: crash
Target Milestone: 1.0
Assignee: David Edmundson
URL:
Keywords:
Duplicates: 392556 393352 399175 409448 421436 422322 424488 424931 432216
Depends on:
Blocks:
 
Reported: 2020-07-19 12:20 UTC by BingMyBong
Modified: 2021-03-10 19:10 UTC
22 users

See Also:
Latest Commit:
Version Fixed In: 5.77


Attachments
backtrace file (14.31 KB, text/plain)
2020-07-24 14:39 UTC, BingMyBong
Details
backtrace file plasmashell 1 (13.87 KB, text/plain)
2020-07-24 14:39 UTC, BingMyBong
Details
backtrace file plasmashell 2 (16.90 KB, text/plain)
2020-07-24 14:40 UTC, BingMyBong
Details
unpatched login/logout session (445.05 KB, text/plain)
2020-10-28 13:18 UTC, T M Nowak
Details
patched login/logout session (348.31 KB, text/plain)
2020-10-28 13:19 UTC, T M Nowak
Details
patch for plasma-workspace 5.20.2 used on Arch Linux reverting 9be7dedb87ea574916f0f8a2837eaf7b19a7a166 (4.50 KB, patch)
2020-10-28 13:20 UTC, T M Nowak
Details

Description BingMyBong 2020-07-19 12:20:51 UTC
SUMMARY
After the 1st logout of a session, and with every subsequent login/logout, I get an increasing number of coredumps in /var/lib/systemd/coredump that I have to keep deleting, mainly from core.kglobalaccel5, core.kded5, core.plasmashell, kdeconnectd, klauncher, drkonqi, kscreen_backend and kactivitymanage, and for various users. It's 150+ per day, and I eventually have to reboot because after a few sessions my machine makes a horrible screaming noise that does not go away until rebooted.

STEPS TO REPRODUCE
1. Login
2. Logout
3. Repeat 1 and 2
See below for a fuller description of the sequence of events. I also have all the coredump files saved if anyone wants to see them.

OBSERVED RESULT
coredumps in /var/lib/systemd/coredump

EXPECTED RESULT
No coredumps

SOFTWARE/OS VERSIONS
opensuse:tumbleweed:20200717
Qt: 5.15.0 KDE Frameworks: 5.72.0 - KDE Plasma:  5.19.3 - kwin 5.19.3
kmail2 5.14.3 (20.04.3) - akonadiserver 5.14.3 (20.04.3) - Kernel:  5.7.7-1-default  - xf86-video-nouveau:  1.0.16


ADDITIONAL INFORMATION
I was using "root" on Ctrl-Alt-F1 to monitor /var/lib/systemd/coredump.
For the first 3 logouts from 3 different users, I used "logout" from the menu and pressed "OK" on the SDDM confirmation screen.

1. 1st Login/logout:
This was my first login after a clean reboot; I ran kmail and vivaldi, then logged out. Once the SDDM login screen was shown, the following
coredumps appeared:
-rw-r-----+ 1 is users  1015428 Jul 19 12:01 core.drkonqi.1002.b249b78ca7c8444f83be0aff12e3c849.3059.1595156511000000000000.lz4
-rw-r-----+ 1 is users 24397231 Jul 19 12:01 core.plasmashell.1002.b249b78ca7c8444f83be0aff12e3c849.1548.1595156514000000000000.lz4
-rw-r-----+ 1 is users  1330268 Jul 19 12:01 core.plasmashell.1002.b249b78ca7c8444f83be0aff12e3c849.3058.1595156511000000000000.lz4

2. 2nd Login/logout:
I went to Ctrl-Alt-F1 to move the coredumps to a folder, then returned via Ctrl-Alt-F7 to log in.
This was my second login; I ran firefox, then logged out. Once the SDDM login screen was shown, the following coredumps appeared:
-rw-r-----+ 1 is users  1015382 Jul 19 12:07 core.drkonqi.1000.b249b78ca7c8444f83be0aff12e3c849.4601.1595156845000000000000.lz4
-rw-r-----+ 1 is users 24988261 Jul 19 12:07 core.plasmashell.1000.b249b78ca7c8444f83be0aff12e3c849.3302.1595156848000000000000.lz4
-rw-r-----+ 1 is users  1330093 Jul 19 12:07 core.plasmashell.1000.b249b78ca7c8444f83be0aff12e3c849.4599.1595156845000000000000.lz4

3. 3rd Login/logout:
I went to Ctrl-Alt-F1 to move the coredumps to a folder, then returned via Ctrl-Alt-F7 to log in.
This was my third login; I ran firefox, then logged out. Once the SDDM login screen was shown, there were no coredumps on this logout.


For the next 3 logouts from 3 different users, I used "logout" from the menu and pressed the Logout icon on the SDDM confirmation screen.

4. 4th Login/logout:
I went to Ctrl-Alt-F1 to move the previous coredumps to a folder, then returned via Ctrl-Alt-F7 to log in.
I ran kmail and vivaldi, then used "logout" from the menu and pressed the Logout icon on the SDDM confirmation screen.
Once the SDDM login screen was shown, the following coredumps appeared:
-rw-r-----+ 1 is users 734337 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7350.1595157482000000000000.lz4
-rw-r-----+ 1 is users 734590 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7401.1595157487000000000000.lz4
-rw-r-----+ 1 is users 734665 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7407.1595157488000000000000.lz4
-rw-r-----+ 1 is users 734681 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7424.1595157490000000000000.lz4
-rw-r-----+ 1 is users 734959 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7437.1595157491000000000000.lz4
-rw-r-----+ 1 is users 735190 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7450.1595157493000000000000.lz4
-rw-r-----+ 1 is users 734887 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7463.1595157494000000000000.lz4
-rw-r-----+ 1 is users 734506 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7476.1595157496000000000000.lz4
-rw-r-----+ 1 is users 734848 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7489.1595157497000000000000.lz4
-rw-r-----+ 1 is users 734722 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7502.1595157499000000000000.lz4
-rw-r-----+ 1 is users 734739 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7515.1595157500000000000000.lz4
-rw-r-----+ 1 is users 734422 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7527.1595157501000000000000.lz4
-rw-r-----+ 1 is users 734479 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7540.1595157503000000000000.lz4
-rw-r-----+ 1 is users 734489 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7552.1595157504000000000000.lz4
-rw-r-----+ 1 is users 734886 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7565.1595157506000000000000.lz4
-rw-r-----+ 1 is users 734580 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7577.1595157507000000000000.lz4
-rw-r-----+ 1 is users 734336 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7592.1595157508000000000000.lz4
-rw-r-----+ 1 is users 734740 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7605.1595157510000000000000.lz4
-rw-r-----+ 1 is users 734723 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7617.1595157511000000000000.lz4
-rw-r-----+ 1 is users 734861 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7630.1595157513000000000000.lz4
-rw-r-----+ 1 is users 734985 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7642.1595157514000000000000.lz4
-rw-r-----+ 1 is users 734486 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7654.1595157515000000000000.lz4
-rw-r-----+ 1 is users 734801 Jul 19 12:18 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7666.1595157517000000000000.lz4

5. Attempted 5th login:
I went to Ctrl-Alt-F1 to move the previous coredumps to a folder, then returned via Ctrl-Alt-F7 to log in.
I got halfway through inputting the password when the SDDM login screen flickered and disappeared, and my previous desktop appeared.
You could press the icons on the taskbar and it attempts to run them (i.e. the cursor pointer changes briefly), and the following
coredumps appeared:
-rw-r-----+ 1 is users  1071595 Jul 19 12:21 core.kdeconnectd.1002.b249b78ca7c8444f83be0aff12e3c849.7948.1595157661000000000000.lz4
-rw-r-----+ 1 is users  1071377 Jul 19 12:21 core.kdeconnectd.1002.b249b78ca7c8444f83be0aff12e3c849.7959.1595157664000000000000.lz4
-rw-r-----+ 1 is users  1071886 Jul 19 12:21 core.kdeconnectd.1002.b249b78ca7c8444f83be0aff12e3c849.7975.1595157666000000000000.lz4
-rw-r-----+ 1 is users  1071675 Jul 19 12:21 core.kdeconnectd.1002.b249b78ca7c8444f83be0aff12e3c849.7986.1595157667000000000000.lz4
-rw-r-----+ 1 is users  1071739 Jul 19 12:21 core.kdeconnectd.1002.b249b78ca7c8444f83be0aff12e3c849.7998.1595157668000000000000.lz4
-rw-r-----+ 1 is users   700309 Jul 19 12:21 core.kded5.1002.b249b78ca7c8444f83be0aff12e3c849.8068.1595157687000000000000.lz4
-rw-r-----+ 1 is users   700043 Jul 19 12:21 core.kded5.1002.b249b78ca7c8444f83be0aff12e3c849.8092.1595157695000000000000.lz4
-rw-r-----+ 1 is users   700069 Jul 19 12:21 core.kded5.1002.b249b78ca7c8444f83be0aff12e3c849.8128.1595157704000000000000.lz4
-rw-r-----+ 1 is users   700448 Jul 19 12:21 core.kded5.1002.b249b78ca7c8444f83be0aff12e3c849.8136.1595157707000000000000.lz4
-rw-r-----+ 1 is users   734236 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7703.1595157632000000000000.lz4
-rw-r-----+ 1 is users   734839 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7719.1595157634000000000000.lz4
-rw-r-----+ 1 is users   734739 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7763.1595157641000000000000.lz4
-rw-r-----+ 1 is users   734948 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7777.1595157642000000000000.lz4
-rw-r-----+ 1 is users   734689 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7791.1595157644000000000000.lz4
-rw-r-----+ 1 is users   734272 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7804.1595157645000000000000.lz4
-rw-r-----+ 1 is users   735067 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7818.1595157647000000000000.lz4
-rw-r-----+ 1 is users   734137 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7830.1595157648000000000000.lz4
-rw-r-----+ 1 is users   734782 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7842.1595157650000000000000.lz4
-rw-r-----+ 1 is users   735020 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7854.1595157651000000000000.lz4
-rw-r-----+ 1 is users   734485 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7868.1595157653000000000000.lz4
-rw-r-----+ 1 is users   734469 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7880.1595157654000000000000.lz4
-rw-r-----+ 1 is users   734360 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7892.1595157655000000000000.lz4
-rw-r-----+ 1 is users   734449 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7904.1595157657000000000000.lz4
-rw-r-----+ 1 is users   734489 Jul 19 12:20 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7917.1595157658000000000000.lz4
-rw-r-----+ 1 is users   734392 Jul 19 12:21 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7929.1595157659000000000000.lz4
-rw-r-----+ 1 is users   734896 Jul 19 12:21 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.7941.1595157661000000000000.lz4
-rw-r-----+ 1 is users   734460 Jul 19 12:21 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8019.1595157685000000000000.lz4
-rw-r-----+ 1 is users   734441 Jul 19 12:21 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8037.1595157686000000000000.lz4
-rw-r-----+ 1 is users   935001 Jul 19 12:20 core.klauncher.1002.b249b78ca7c8444f83be0aff12e3c849.7752.1595157639000000000000.lz4
-rw-r-----+ 1 is users   625913 Jul 19 12:21 core.kscreen_backend.1002.b249b78ca7c8444f83be0aff12e3c849.8056.1595157687000000000000.lz4
-rw-r-----+ 1 is users   626253 Jul 19 12:21 core.kscreen_backend.1002.b249b78ca7c8444f83be0aff12e3c849.8098.1595157695000000000000.lz4
-rw-r-----+ 1 is users 23394535 Jul 19 12:21 core.plasmashell.1002.b249b78ca7c8444f83be0aff12e3c849.6467.1595157692000000000000.lz4


6. After Ctrl-Alt-Backspace:
I went to Ctrl-Alt-F1 to move the previous coredumps to a folder, then returned via Ctrl-Alt-F7 to the fake desktop.
To clear the fake desktop that appeared in point 5, I pressed Ctrl-Alt-Backspace twice, and the following coredumps appeared:
-rw-r-----+ 1 is users   735817 Jul 19 12:24 core.kactivitymanage.1002.b249b78ca7c8444f83be0aff12e3c849.8259.1595157873000000000000.lz4
-rw-r-----+ 1 is users   700416 Jul 19 12:23 core.kded5.1002.b249b78ca7c8444f83be0aff12e3c849.8174.1595157829000000000000.lz4
-rw-r-----+ 1 is users   700273 Jul 19 12:23 core.kded5.1002.b249b78ca7c8444f83be0aff12e3c849.8196.1595157833000000000000.lz4
-rw-r-----+ 1 is users   734313 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8288.1595157879000000000000.lz4
-rw-r-----+ 1 is users   734583 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8304.1595157883000000000000.lz4
-rw-r-----+ 1 is users   734376 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8319.1595157886000000000000.lz4
-rw-r-----+ 1 is users   734450 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8332.1595157888000000000000.lz4
-rw-r-----+ 1 is users   734785 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8349.1595157890000000000000.lz4
-rw-r-----+ 1 is users   734713 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8361.1595157892000000000000.lz4
-rw-r-----+ 1 is users   734414 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8374.1595157893000000000000.lz4
-rw-r-----+ 1 is users   734609 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8386.1595157894000000000000.lz4
-rw-r-----+ 1 is users   734428 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8398.1595157896000000000000.lz4
-rw-r-----+ 1 is users   734895 Jul 19 12:24 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8411.1595157897000000000000.lz4
-rw-r-----+ 1 is users   734458 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8423.1595157898000000000000.lz4
-rw-r-----+ 1 is users   734661 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8435.1595157900000000000000.lz4
-rw-r-----+ 1 is users   733922 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8447.1595157901000000000000.lz4
-rw-r-----+ 1 is users   734675 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8459.1595157902000000000000.lz4
-rw-r-----+ 1 is users   734398 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8471.1595157904000000000000.lz4
-rw-r-----+ 1 is users   734959 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8483.1595157905000000000000.lz4
-rw-r-----+ 1 is users   734164 Jul 19 12:25 core.kglobalaccel5.1002.b249b78ca7c8444f83be0aff12e3c849.8495.1595157906000000000000.lz4
-rw-r-----+ 1 is users 30786358 Jul 19 12:24 core.plasmashell.1002.b249b78ca7c8444f83be0aff12e3c849.7313.1595157872000000000000.lz4
-rw-r-----  1 is users 14067405 Jul 19 12:24 core.sddm-greeter.474.b249b78ca7c8444f83be0aff12e3c849.7381.1595157871000000000000.lz4

7. 5th login:
I went to Ctrl-Alt-F1 to move the previous coredumps to a folder, then returned via Ctrl-Alt-F7 to log in.
I input the password and the splash screen displayed, but it never got any further, so I pressed Ctrl-Alt-Backspace twice.
-rw-r-----+ 1 is users 735428 Jul 19 12:28 core.kglobalaccel5.1000.b249b78ca7c8444f83be0aff12e3c849.9684.1595158120000000000000.lz4

8. Failed 5th login:
I went to Ctrl-Alt-F1 to move the previous coredumps to a folder, then returned via Ctrl-Alt-F7 to log in.
-rw-r-----+ 1 is users 1015068 Jul 19 12:34 core.drkonqi.1001.b249b78ca7c8444f83be0aff12e3c849.10292.1595158437000000000000.lz4
-rw-r-----+ 1 is users 3720439 Jul 19 12:34 core.kdeconnectd.1001.b249b78ca7c8444f83be0aff12e3c849.9980.1595158450000000000000.lz4
-rw-r-----+ 1 is users  934833 Jul 19 12:34 core.klauncher.1001.b249b78ca7c8444f83be0aff12e3c849.10308.1595158437000000000000.lz4
Comment 1 Nate Graham 2020-07-23 18:54:16 UTC
Can you attach a backtrace from one of the crash reports? The fact that something is crashing over and over again is obviously a problem, but we need to know the details so we can fix it. :)

See https://community.kde.org/Get_Involved/Issue_Reporting#Crash_reports_must_include_backtraces
Comment 2 BingMyBong 2020-07-24 06:33:13 UTC
(In reply to Nate Graham from comment #1)
> Can you attach a backtrace from one of the crash reports? The fact that
> something is crashing over and over again is obviously a problem, but we
> need to know the details so we can fix it. :)
> 
> See
> https://community.kde.org/Get_Involved/
> Issue_Reporting#Crash_reports_must_include_backtraces

To be honest, I have no idea how to do that. I'll need a bit of instruction if someone has the time, or written instructions to follow. The coredumps only seem to be created as a result of, or during, the logout/login process. I never get a chance to create a backtrace via drkonqi, as it never appears and is itself subject to coredumping.
Comment 3 Christoph Feck 2020-07-24 12:49:56 UTC
You can use gdb on a coredump file to get the backtrace. Hopefully, you didn't remove them all yet, and the binaries are still the same.

For details, see e.g. https://stackoverflow.com/questions/5745215/getting-stacktrace-from-core-dump
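On a systemd-based distribution the usual sequence is roughly the following (a sketch only: the dump filename and binary path are placeholders, lz4, gdb and the matching debuginfo packages must be installed, and each step is guarded so it is a no-op if the file is absent):

```shell
# Placeholder name - real dumps look like
# core.plasmashell.<uid>.<boot-id>.<pid>.<timestamp>.lz4
core=/var/lib/systemd/coredump/core.plasmashell.EXAMPLE.lz4

# 1. systemd stores dumps lz4-compressed, so decompress first:
if [ -e "$core" ]; then
    lz4 -d "$core" /tmp/core.plasmashell
fi

# 2. Point gdb at the same binary that crashed and save a full backtrace:
if [ -e /tmp/core.plasmashell ]; then
    gdb --batch -ex "thread apply all bt full" \
        /usr/bin/plasmashell /tmp/core.plasmashell > backtrace.txt
fi

# Newer systems can let coredumpctl locate the dump and launch gdb directly:
# coredumpctl gdb plasmashell
echo "done"
```

With the debuginfo packages installed (zypper suggests the exact ones, as seen later in comment 8), the resulting backtrace.txt is the kind of file worth attaching here.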
Comment 4 BingMyBong 2020-07-24 13:47:30 UTC
(In reply to Christoph Feck from comment #3)
> You can use gdb on a coredump file to get the backtrace. Hopefully, you
> didn't remove them all yet, and the binaries are still the same.
> 
> For details, see e.g.
> https://stackoverflow.com/questions/5745215/getting-stacktrace-from-core-dump

Thanks for that.  I kept all of them just in case.  I'll do the first plasmashell coredump and post that.
Comment 5 BingMyBong 2020-07-24 14:39:01 UTC
Created attachment 130366 [details]
backtrace file

Full backtrace
Comment 6 BingMyBong 2020-07-24 14:39:42 UTC
Created attachment 130367 [details]
backtrace file plasmashell 1

backtrace file plasmashell 1
Comment 7 BingMyBong 2020-07-24 14:40:15 UTC
Created attachment 130368 [details]
backtrace file plasmashell 2

backtrace file plasmashell 2
Comment 8 BingMyBong 2020-07-24 14:41:09 UTC
I've added 3 full backtrace files:
gdb-drkonqi-backtrace-full
gdb-plasmashell-3058-backtrace-full
gdb-plasmashell-1548-backtrace-full

They are for the following coredump files (in lz4 format, listed in creation-time order); I will attach these if you request them:
3817472  core.drkonqi.1002.b249b78ca7c8444f83be0aff12e3c849.3059.1595156511000000000000
4726784  core.plasmashell.1002.b249b78ca7c8444f83be0aff12e3c849.3058.1595156511000000000000
482148352 core.plasmashell.1002.b249b78ca7c8444f83be0aff12e3c849.1548.1595156514000000000000

They all relate to the first logout only.

The backtrace files all displayed these few lines, but I didn't know what to do about it:
warning: Ignoring non-absolute filename: <linux-vdso.so.1>
Missing separate debuginfo for linux-vdso.so.1
Try: zypper install -C "debuginfo(build-id)=f4493ee3352f494cabebd179165698eb38cd5c5c"
Comment 9 Nate Graham 2020-07-29 23:05:44 UTC
Thanks. The backtraces are beyond my ability to interpret, so I'll hand this off to other, more experienced devs at this point.
Comment 10 BingMyBong 2020-07-30 06:21:05 UTC
(In reply to Nate Graham from comment #9)
> Thanks. The backtraces are beyond my ability to interpret, so I'll hand it
> off to other more experience devs at this point.

Thanks.
I'd like to add that these three appear after a logout and before a login; I originally thought a login also created these coredumps. The login might create the other coredumps like kglobalaccel5, kded5, etc., but I'll log those later with backtraces and link them to this bug report.
Comment 11 Nate Graham 2020-08-02 15:07:51 UTC
FWIW I'm also seeing kglobalaccel crashing in a loop. See Bug 424931.
Comment 12 Nate Graham 2020-08-02 15:08:12 UTC
Actually the backtrace in my bug looks identical to yours!
Comment 13 Nate Graham 2020-08-02 15:08:18 UTC
*** Bug 424931 has been marked as a duplicate of this bug. ***
Comment 14 Nate Graham 2020-08-02 15:14:35 UTC
I'm able to reproduce the same crash here in my everything-built-from-source Plasma session. I am seeing crashes in kglobalaccel and kactivitymanagerd. All of the crashes have the same backtrace as the ones for BingMyBong's plasmashell crashes:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007f4fb2d26539 in __GI_abort () at abort.c:79
#2  0x00007f4fb2f68c27 in qt_message_fatal (message=<synthetic pointer>..., context=...) at global/qlogging.cpp:1914
#3  QMessageLogger::fatal (this=this@entry=0x7ffd8da2e2b0, msg=msg@entry=0x7f4fb3a82f05 "%s") at global/qlogging.cpp:893
#4  0x00007f4fb35b96d4 in init_platform (argv=<optimized out>, argc=@0x7ffd8da2e4fc: 1, platformThemeName=..., platformPluginPath=..., 
    pluginNamesWithArguments=...) at ../../include/QtCore/../../src/corelib/tools/qarraydata.h:208
#5  QGuiApplicationPrivate::createPlatformIntegration (this=0x5637552696f0) at kernel/qguiapplication.cpp:1481
#6  0x00007f4fb35b9b60 in QGuiApplicationPrivate::createEventDispatcher (this=<optimized out>) at kernel/qguiapplication.cpp:1498
#7  0x00007f4fb3188696 in QCoreApplicationPrivate::init (this=this@entry=0x5637552696f0) at kernel/qcoreapplication.cpp:852
#8  0x00007f4fb35bcaaf in QGuiApplicationPrivate::init (this=0x5637552696f0) at kernel/qguiapplication.cpp:1527
#9  0x00007f4fb35bd9e4 in QGuiApplication::QGuiApplication (this=0x7ffd8da2e540, argc=@0x7ffd8da2e4fc: 1, argv=0x7ffd8da2e6c8, flags=331520)
    at kernel/qguiapplication.h:203
#10 0x0000563753fd125a in main (argc=<optimized out>, argv=0x7ffd8da2e550) at /usr/src/debug/kglobalaccel-5.72.0-1.1.x86_64/src/runtime/main.cpp:47

Something odd is going on...

Notably we're both using openSUSE Tumbleweed with Qt 5.15.0.
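For what it's worth, frames #2-#5 above look like Qt's generic abort path for "no usable platform plugin": these processes appear to be starting (or restarting) after the X socket is already gone. The same abort can be provoked by hand; a sketch (it assumes kglobalaccel5, or any other Qt GUI binary, is installed, and is guarded to be a no-op otherwise):

```shell
# With neither DISPLAY nor WAYLAND_DISPLAY set, a Qt GUI process cannot
# create a platform integration and aborts from
# QGuiApplicationPrivate::createPlatformIntegration(), as in the backtrace.
if command -v kglobalaccel5 >/dev/null 2>&1; then
    env -u DISPLAY -u WAYLAND_DISPLAY kglobalaccel5 || true
fi
echo "checked"
```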
Comment 15 BingMyBong 2020-08-02 16:19:42 UTC
(In reply to Nate Graham from comment #14)

Also, I only started to notice it from the 5.19.0 release, because that's when the logout/lock widget started to break.
Comment 16 BingMyBong 2020-08-14 13:34:21 UTC
Since yesterday's upgrade to plasma-frameworks 5.73, lots of the other coredumps from kglobalaccel5, kdeconnectd, kactivitymanage, klauncher, etc. have ceased.

But I still get the logout coredumps from plasmashell and drkonqi; here is today's list.

Aug 14 08:51 core.drkonqi.1002.339aaeba2d3e4f6e8620d72043084edd.2556.1597391457000000000000.lz4
Aug 14 08:51 core.plasmashell.1002.339aaeba2d3e4f6e8620d72043084edd.2553.1597391457000000000000.lz4
Aug 14 08:51 core.plasmashell.1002.339aaeba2d3e4f6e8620d72043084edd.1479.1597391461000000000000.lz4
Aug 14 09:39 core.drkonqi.1001.339aaeba2d3e4f6e8620d72043084edd.9314.1597394354000000000000.lz
Aug 14 09:39 core.plasmashell.1001.339aaeba2d3e4f6e8620d72043084edd.9313.1597394354000000000000.lz4
Aug 14 09:39 core.plasmashell.1001.339aaeba2d3e4f6e8620d72043084edd.2812.1597394357000000000000.lz4
Aug 14 12:29 core.drkonqi.1000.339aaeba2d3e4f6e8620d72043084edd.21878.1597404549000000000000.lz4
Aug 14 12:29 core.plasmashell.1000.339aaeba2d3e4f6e8620d72043084edd.21877.1597404549000000000000.lz4
Aug 14 12:29 core.plasmashell.1000.339aaeba2d3e4f6e8620d72043084edd.9547.1597404558000000000000.lz4
Comment 17 BingMyBong 2020-08-14 14:39:20 UTC
(In reply to BingMyBong from comment #16)

Update:
Just got a kglobalaccel5 coredump when I used the logout/lock widget instead of logging out via the menu option.
Comment 18 Patrick Silva 2020-08-14 15:00:12 UTC
On my Arch Linux, kglobalaccel5 is still crashing after logout from an X11 session.
Comment 19 BingMyBong 2020-08-15 14:45:49 UTC
(In reply to BingMyBong from comment #17)

UPDATE: I spoke too soon; I've now got loads of kglobalaccel5 coredumps today.
Comment 20 BingMyBong 2020-09-14 17:18:06 UTC
My plasmashell crashes seem to have stopped. All I've done today is set "KillUserProcesses=yes" in /etc/systemd/logind.conf, run "sudo systemctl disable --now lvm2-monitor", and reboot.

I'm still getting coredumps of kglobalaccel5 (8 files), klauncher, kscreen_backend and kded5 for one user, and kglobalaccel5, kded5 (2 files) and drkonqi for a different user.
I can't get any backtraces for these files; I get the error "Missing separate debuginfo for the main executable file" from gdb.
Comment 21 David Edmundson 2020-09-14 21:43:47 UTC
I wonder if some of these are surfaced by: 9be7dedb87ea574916f0f8a2837eaf7b19a7a166 in p-w.

The old XSManager would kill the app before X quit, which would prevent some of these?
Comment 22 Patrick Silva 2020-09-15 14:09:03 UTC
Logout from a Wayland session causes several coredumps on neon unstable.

Operating System: KDE neon Unstable Edition
KDE Plasma Version: 5.19.80
KDE Frameworks Version: 5.75.0
Qt Version: 5.15.0

Tue 2020-09-15 10:58:19 -03   15918  1000  1003   6 present   /usr/bin/distro-release-notifier
Tue 2020-09-15 10:58:22 -03   15839  1000  1003   6 present   /usr/lib/x86_64-linux-gnu/libexec/kactivitymanagerd
Tue 2020-09-15 10:58:24 -03   15708  1000  1003   6 present   /usr/bin/kwalletd5
Tue 2020-09-15 10:58:24 -03   15777  1000  1003   6 present   /usr/bin/kded5
Tue 2020-09-15 10:58:24 -03   24718  1000  1003   6 present   /usr/lib/x86_64-linux-gnu/libexec/drkonqi
Tue 2020-09-15 10:58:25 -03   24734  1000  1003   6 present   /usr/lib/x86_64-linux-gnu/libexec/drkonqi
Tue 2020-09-15 10:58:25 -03   24768  1000  1003   6 present   /usr/lib/x86_64-linux-gnu/libexec/drkonqi
Tue 2020-09-15 10:58:25 -03   24767  1000  1003   6 present   /usr/bin/kded5
Tue 2020-09-15 10:58:28 -03   19294  1000  1003   6 present   /usr/bin/krunner
Comment 23 T M Nowak 2020-10-28 13:16:26 UTC
(In reply to David Edmundson from comment #21)
> I wonder if some of these are surfaced by:
> 9be7dedb87ea574916f0f8a2837eaf7b19a7a166 in p-w.
> 
> The old XSManager would kill the app before X quit, which would prevent some
> of these?

Thanks, reverting this commit fixes the massive spurt of core dumps on logout. I'll attach login/logout sessions for X11 and Wayland, and the patch I used to revert the changes for Plasma 5.20.2 on Arch Linux. But I think the underlying issue persists: the display server gets killed *before* the processes relying on it.
Comment 24 T M Nowak 2020-10-28 13:18:22 UTC
Created attachment 132836 [details]
unpatched login/logout session
Comment 25 T M Nowak 2020-10-28 13:19:01 UTC
Created attachment 132837 [details]
patched login/logout session
Comment 26 T M Nowak 2020-10-28 13:20:59 UTC
Created attachment 132838 [details]
patch for plasma-workspace 5.20.2 used on Arch Linux reverting 9be7dedb87ea574916f0f8a2837eaf7b19a7a166
Comment 27 David Edmundson 2020-10-28 14:47:47 UTC
> But I think the underlying issue persists: the display server gets killed *before* the processes relying on it.

Yeah, that's a somewhat separate issue. Session management will just help with the graceful teardown case.
Comment 28 David Edmundson 2020-11-07 23:48:43 UTC
*** Bug 422322 has been marked as a duplicate of this bug. ***
Comment 29 David Edmundson 2020-11-11 23:03:46 UTC
Git commit 9e641d41717911f835fba59eb5fab6bbf97f8502 by David Edmundson.
Committed on 11/11/2020 at 23:03.
Pushed by davidedmundson into branch 'master'.

Revert "Use new simpler way to disable session management in services"

The two ways of disabling session management have the same impact on the
session being saved, but there is one behavioural side-effect that
turned out to be less ideal.

By disabling completely we don't follow the session manager telling the
application to quit. That's not something needed with the systemd boot,
but for the legacy boot effectively we were just closing applications by
ripping the X connection away from under them.

Some applications are bad at handling this and this led to a bunch of
crashes or dangling processes at logout.

This reverts commit 9be7dedb87ea574916f0f8a2837eaf7b19a7a166.

M  +7    -1    gmenu-dbusmenu-proxy/main.cpp
M  +9    -1    krunner/main.cpp
M  +9    -1    shell/main.cpp
M  +7    -1    xembed-sni-proxy/main.cpp

https://invent.kde.org/plasma/plasma-workspace/commit/9e641d41717911f835fba59eb5fab6bbf97f8502
Comment 30 David Edmundson 2020-11-11 23:05:59 UTC
Git commit da34fd073f6b361fde1fdcee559d60e8c0268cd6 by David Edmundson.
Committed on 11/11/2020 at 23:05.
Pushed by davidedmundson into branch 'Plasma/5.20'.

Revert "Use new simpler way to disable session management in services"

The two ways of disabling session management have the same impact on the
session being saved, but there is one behavioural side-effect that
turned out to be less ideal.

By disabling completely we don't follow the session manager telling the
application to quit. That's not something needed with the systemd boot,
but for the legacy boot effectively we were just closing applications by
ripping the X connection away from under them.

Some applications are bad at handling this and this led to a bunch of
crashes or dangling processes at logout.

This reverts commit 9be7dedb87ea574916f0f8a2837eaf7b19a7a166.


(cherry picked from commit 9e641d41717911f835fba59eb5fab6bbf97f8502)

M  +7    -1    gmenu-dbusmenu-proxy/main.cpp
M  +9    -1    krunner/main.cpp
M  +9    -1    shell/main.cpp
M  +7    -1    xembed-sni-proxy/main.cpp

https://invent.kde.org/plasma/plasma-workspace/commit/da34fd073f6b361fde1fdcee559d60e8c0268cd6
Comment 31 Nate Graham 2020-11-15 14:17:25 UTC
Is there more to do here, or can this be closed?
Comment 32 T M Nowak 2020-11-15 20:02:53 UTC
(In reply to Nate Graham from comment #31)
> Is there more to do here, or can this be closed?

From my POV, yes, it can be closed: the apps still report that the X11 display died, but there are no coredumps anymore. As I'm not the original submitter, it would be good to wait for feedback from others.
Comment 33 BingMyBong 2020-11-16 07:55:05 UTC
(In reply to Nate Graham from comment #31)
> Is there more to do here, or can this be closed?

I'm still getting coredumps, but this could be a separate/new issue. I now get 9 instances of core.qdbus and one instance each of kglobalaccel5, kded5, drkonqi, and kded5, in that order, every time.
I run a kdialog script on logout that uses qdbus to display the progress of my backups via rsync. This was working fine until recently; now it just shows the first message, though the backup still completes okay.
If this is deemed a separate issue, I'm happy for this bug to be closed and I'll start a new one.
Comment 34 BingMyBong 2020-11-26 18:12:39 UTC
I've "fixed" most of my coredumps on logout (replaced qdbus with qdbus-qt5), so I'm just left with these:
-rw-r-----+ 1 root root  769148 Nov 26 15:58 core.kglobalaccel5.1002.f4a0442535264a189e6774e195874e79.22064.1606406311000000.lz4
-rw-r-----+ 1 root root  768860 Nov 26 15:58 core.kglobalaccel5.1002.f4a0442535264a189e6774e195874e79.22076.1606406312000000.lz4
-rw-r-----+ 1 root root 1056917 Nov 26 15:58 core.drkonqi.1002.f4a0442535264a189e6774e195874e79.22080.1606406312000000.lz4
-rw-r-----+ 1 root root  734967 Nov 26 15:58 core.kded5.1002.f4a0442535264a189e6774e195874e79.22079.1606406312000000.lz4
-rw-r-----+ 1 root root  769476 Nov 26 15:58 core.kglobalaccel5.1002.f4a0442535264a189e6774e195874e79.22107.1606406312000000.lz4
-rw-r-----+ 1 root root  769270 Nov 26 15:58 core.kglobalaccel5.1002.f4a0442535264a189e6774e195874e79.22141.1606406313000000.lz4
-rw-r-----+ 1 root root  769128 Nov 26 15:58 core.kglobalaccel5.1002.f4a0442535264a189e6774e195874e79.22168.1606406314000000.lz4
-rw-r-----+ 1 root root 5214281 Nov 26 15:58 core.kded5.1002.f4a0442535264a189e6774e195874e79.18160.1606406313000000.lz4
Comment 35 Jazz 2020-12-04 14:28:23 UTC
I use Manjaro KDE with Plasma 5.20.4 and am affected by this bug. Every time I log out and back in, the system usually hangs for a long time and a few more coredumps appear in my /var/lib/systemd/coredump:

core.pamac-tray-appi*.zst
core.plasmashell*.zst

I tried to fix the issue with the following changes, but they didn't help:

~/.xinitrc:
DEFAULT_SESSION=startplasma-x11

/etc/systemd/logind.conf.d/override.conf:
KillUserProcesses=yes


The weirdest part is that the bug doesn't seem consistent. I continued to test logout/login multiple times with completely different results:

reboot > login as a new user A > logout > login > huge lag > system seems to work normally > logout > login as my default user B > huge lag > system seems to work normally > logout > login as B > no lag(!) > system seems to work normally > logout > login as B > huge lag > can't see my desktop normally, can't minimize the only window that appeared on screen (Rambox), could only go to a TTY to reboot the system > login as B > logout > login as B > huge lag > not all services have been loaded and my tiling window manager doesn't work > shutdown
Comment 36 Bruno Filipe 2020-12-04 22:36:52 UTC
I tested a clean install of the Fedora KDE spin and the bug is reproducible out of the box, both before and after a full system upgrade. This breaks multi-user desktops and should be looked at with higher priority.
Comment 37 Tomasz Paweł Gajc 2020-12-11 14:20:57 UTC
Hi,

I have a fully updated OpenMandriva Cooker with Plasma 5.20.4 and KDE Frameworks 5.77.0:

[root@tpg-virtualbox systemd]# rpm -qa | grep plasma-workspace
plasma-workspace-wayland-5.20.4-3.x86_64
plasma-workspace-5.20.4-3.x86_64
plasma-workspace-x11-5.20.4-3.x86_64

After hitting logout in a Plasma Wayland session, I see these in coredumpctl:
Fri 2020-12-11 12:30:07 CET    1955  1001  1006   6 present   /usr/lib64/libexec/drkonqi
Fri 2020-12-11 12:30:08 CET     896  1001  1006   6 present   /usr/lib64/libexec/kdeconnectd
Fri 2020-12-11 12:30:08 CET     921  1001  1006   6 present   /usr/lib64/libexec/org_kde_powerdevil
Fri 2020-12-11 12:30:18 CET     843  1001  1006   6 present   /usr/bin/kded5
Fri 2020-12-11 12:30:18 CET    1954  1001  1006   6 present   /usr/bin/plasmashell


gru 11 12:29:56 tpg-virtualbox su[1089]: pam_unix(su:session): session closed for user root
gru 11 12:29:56 tpg-virtualbox systemd[754]: app-org.kde.konsole-919a169276f3431ba46e448492368421.scope: Succeeded.
gru 11 12:29:56 tpg-virtualbox plasmashell[1954]: file:///usr/lib64/qt5/qml/org/kde/plasma/components/ModelContextMenu.qml:38:1: QML ModelContextMenu: Ac>
gru 11 12:29:57 tpg-virtualbox plasmashell[1954]: file:///usr/share/plasma/plasmoids/org.kde.plasma.printmanager/contents/ui/PopupDialog.qml:91:17: Unabl>
gru 11 12:29:57 tpg-virtualbox kaccess[870]: The X11 connection broke (error 1). Did the X11 server die?
gru 11 12:29:57 tpg-virtualbox kactivitymanagerd[912]: The X11 connection broke (error 1). Did the X11 server die?
gru 11 12:29:57 tpg-virtualbox kernel: A fatal guest X Window error occurred.  This may just mean that the Window system was shut down while the client w>
gru 11 12:29:57 tpg-virtualbox kernel: A fatal guest X Window error occurred.  This may just mean that the Window system was shut down while the client w>
gru 11 12:29:57 tpg-virtualbox kded5[843]: ktp-kded-module: activity service not running, user account presences won't load or save
gru 11 12:29:57 tpg-virtualbox kernel: A fatal guest X Window error occurred.  This may just mean that the Window system was shut down while the client w>
gru 11 12:29:57 tpg-virtualbox kernel: Terminating ...
gru 11 12:29:57 tpg-virtualbox kernel: Terminating HGCM thread ...
gru 11 12:29:57 tpg-virtualbox kernel: Terminating X11 thread ...
gru 11 12:29:57 tpg-virtualbox systemd[754]: dbus-:1.2-org.kde.ActivityManager@0.service: Succeeded.
gru 11 12:29:57 tpg-virtualbox org_kde_powerdevil[921]: The Wayland connection broke. Did the Wayland compositor die?
gru 11 12:29:57 tpg-virtualbox kdeconnectd[896]: The Wayland connection broke. Did the Wayland compositor die?
gru 11 12:29:57 tpg-virtualbox drkonqi[1955]: The Wayland connection broke. Did the Wayland compositor die?
gru 11 12:29:57 tpg-virtualbox plasmashell[1954]: The Wayland connection broke. Did the Wayland compositor die?
gru 11 12:29:57 tpg-virtualbox kded5[843]: The Wayland connection broke. Did the Wayland compositor die?
gru 11 12:29:58 tpg-virtualbox drkonqi[1989]: Failed to create wl_display (No such file or directory)
gru 11 12:29:58 tpg-virtualbox drkonqi[1989]: qt.qpa.plugin: Could not load the Qt platform plugin "wayland" in "" even though it was found.
gru 11 12:29:58 tpg-virtualbox drkonqi[1989]: This application failed to start because no Qt platform plugin could be initialized. Reinstalling the appli>
                                              
                                              Available platform plugins are: wayland-org.kde.kwin.qpa, eglfs, wayland-egl, wayland, wayland-xcomposite-e>
gru 11 12:29:58 tpg-virtualbox systemd[1]: Created slice system-systemd\x2dcoredump.slice.
gru 11 12:29:58 tpg-virtualbox systemd[1]: Started Process Core Dump (PID 1994/UID 0).
gru 11 12:29:58 tpg-virtualbox drkonqi[1993]: Failed to create wl_display (No such file or directory)
gru 11 12:29:58 tpg-virtualbox drkonqi[1993]: qt.qpa.plugin: Could not load the Qt platform plugin "wayland" in "" even though it was found.
gru 11 12:29:58 tpg-virtualbox drkonqi[1993]: This application failed to start because no Qt platform plugin could be initialized. Reinstalling the appli>
                                              
                                              Available platform plugins are: wayland-org.kde.kwin.qpa, eglfs, wayland-egl, wayland, wayland-xcomposite-e>
gru 11 12:29:58 tpg-virtualbox kernel: Error waiting for X11 thread to terminate: VERR_TIMEOUT
gru 11 12:29:58 tpg-virtualbox kernel: Terminating threads done
gru 11 12:29:58 tpg-virtualbox sddm-helper[745]: pam_kwallet5(sddm:session): pam_kwallet5: pam_sm_close_session
gru 11 12:29:58 tpg-virtualbox sddm-helper[745]: pam_unix(sddm:session): session closed for user tpg
gru 11 12:29:58 tpg-virtualbox sddm-helper[745]: pam_kwallet5(sddm:setcred): pam_kwallet5: pam_sm_setcred
gru 11 12:29:58 tpg-virtualbox systemd-logind[541]: Session 2 logged out. Waiting for processes to exit.
Comment 38 Angry 2020-12-11 16:57:03 UTC
Similar issues here on OpenMandriva Cooker with Plasma 5.20.4 and Frameworks 5.77.0.

Xsession works fine.

On Wayland session.

First login = crash:
https://pastebin.com/3Hius7Q2

After logging out and back in to the Plasma session, I see a lot of core dumps in /var/lib/systemd/coredump.
For example: 3x plasmashell, 2x kdeconnect, 2x kded5, 2x kde_powerde, akonadiserver or kwin_wayland.

Here is the relevant part of my journalctl output, including the crash:
https://pastebin.com/bs07i96x
Comment 39 Nate Graham 2020-12-22 22:00:43 UTC
At this point the most common ones have been fixed. For people still experiencing crashes on login/logout, please open new bug reports, one per crash. That will get them fixed faster than continuing to use this as a sort of umbrella bug report. Thanks!
Comment 40 Patrick Silva 2020-12-23 11:06:16 UTC
My bug reports related to login/logout:

bug 430736
bug 430740
bug 430741
bug 430739
Comment 41 Andrey 2020-12-23 15:09:15 UTC
(In reply to Patrick Silva from comment #40)
> My bug reports related to login/logout:
> 
> bug 430736
> bug 430740
> bug 430741
> bug 430739

I think it's better to add them to the "See also" field at the top; it will be hard to find them here.
https://bugs.kde.org/page.cgi?id=fields.html#see_also
Comment 42 Nate Graham 2021-02-24 04:53:54 UTC
*** Bug 424488 has been marked as a duplicate of this bug. ***
Comment 43 Nate Graham 2021-02-26 16:56:11 UTC
*** Bug 432216 has been marked as a duplicate of this bug. ***
Comment 44 Nate Graham 2021-02-26 17:05:16 UTC
*** Bug 409448 has been marked as a duplicate of this bug. ***
Comment 45 Nate Graham 2021-03-09 03:58:25 UTC
*** Bug 421436 has been marked as a duplicate of this bug. ***
Comment 46 Nate Graham 2021-03-10 19:09:54 UTC
*** Bug 392556 has been marked as a duplicate of this bug. ***
Comment 47 Nate Graham 2021-03-10 19:10:08 UTC
*** Bug 393352 has been marked as a duplicate of this bug. ***
Comment 48 Nate Graham 2021-03-10 19:10:12 UTC
*** Bug 399175 has been marked as a duplicate of this bug. ***