Bug 429627 - open firefox causes crash of kde plasmashell during resume after hibernate on wayland
Status: RESOLVED WORKSFORME
Alias: None
Product: kwin
Classification: Plasma
Component: wayland-generic
Version: 5.21.3
Platform: openSUSE Linux
Priority: NOR, Severity: normal
Target Milestone: ---
Assignee: KWin default assignee
URL:
Keywords: wayland
Depends on:
Blocks:
 
Reported: 2020-11-25 09:06 UTC by Walther Pelser
Modified: 2022-05-19 04:35 UTC (History)
CC List: 4 users

See Also:
Latest Commit:
Version Fixed In:
Sentry Crash Report:


Attachments
systemd-coredump 07.03.21 12:22.txt (8.85 KB, text/plain); 2021-03-07 15:54 UTC, Walther Pelser
search 30.000 lines of journald item "die" (2 bytes, text/plain); 2021-03-07 15:57 UTC, Walther Pelser
kwin_wayland coredump (18.43 KB, text/plain); 2021-03-08 17:53 UTC, Walther Pelser
wayland-session.log (35.78 KB, text/x-log); 2021-03-08 17:58 UTC, Walther Pelser
plasmashell coredumpctl 17.03.21.txt (38.80 KB, text/plain); 2021-03-17 15:38 UTC, Walther Pelser

Description Walther Pelser 2020-11-25 09:06:17 UTC
SUMMARY
After resume from hibernate there is a lot of CPU usage (ca. 50% on a 4-core CPU) for about one minute.

STEPS TO REPRODUCE
1. Resume from hibernate; the problem occurs every time.

OBSERVED RESULT
After resume from hibernate I see high CPU usage in the background for about one minute. The sound of my HDD is similar to a restart of Plasma. Sometimes the Plasma screen is completely rebuilt, but mostly it is not. KSysGuard does not show me any application that uses that much CPU.
I could not find a log file for this.

EXPECTED RESULT
After resume from hibernate, Plasma should behave the same way it always does after a normal boot, i.e. CPU usage should drop to a very low level after a few seconds.

SOFTWARE/OS VERSIONS
Linux/KDE Plasma:
KDE Plasma Version:  5.20.3
KDE Frameworks Version: 5.76.0
Qt Version: 5.15

ADDITIONAL INFORMATION
This bug has existed since Plasma 5.20.x.
Comment 1 Walther Pelser 2020-11-25 09:08:43 UTC
Additional comment: it is the Wayland session, which is started with SDDM.
Comment 2 Walther Pelser 2020-11-25 11:43:16 UTC
Since the last reboot today it has not been reproducible any more.
Comment 3 Walther Pelser 2020-11-25 16:05:43 UTC
Reopened because it occurred again. I suspect it could have something to do with Firefox: before the sleep I was working with mozregression.
This is the report from when plasmashell crashed after resume:
Application: Plasma (plasmashell), signal: Aborted

[New LWP 1727]
[New LWP 1888]
[New LWP 1889]
[New LWP 1890]
[New LWP 1891]
[New LWP 1892]
[New LWP 1900]
[New LWP 1907]
[New LWP 1915]
[New LWP 1916]
[New LWP 4551]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f7748c8583f in poll () from /lib64/libc.so.6
[Current thread is 1 (Thread 0x7f7746b6a840 (LWP 1717))]

Thread 12 (Thread 0x7f772600b640 (LWP 4551)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f7749038dfb in QWaitCondition::wait(QMutex*, QDeadlineTimer) () at /usr/lib64/libQt5Core.so.5
#2  0x00007f774ad4cb57 in  () at /usr/lib64/libQt5Quick.so.5
#3  0x00007f774ad4efe9 in  () at /usr/lib64/libQt5Quick.so.5
#4  0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#5  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#6  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 11 (Thread 0x7f76feb5f640 (LWP 1916)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f7749038dfb in QWaitCondition::wait(QMutex*, QDeadlineTimer) () at /usr/lib64/libQt5Core.so.5
#2  0x00007f774ad4cb57 in  () at /usr/lib64/libQt5Quick.so.5
#3  0x00007f774ad4efe9 in  () at /usr/lib64/libQt5Quick.so.5
#4  0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#5  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#6  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 10 (Thread 0x7f7708d75640 (LWP 1915)):
#0  0x00007f77481d1ba4 in pthread_getspecific () at /lib64/libpthread.so.0
#1  0x00007f774766e700 in g_thread_self () at /usr/lib64/libglib-2.0.so.0
#2  0x00007f774764511f in g_main_context_iteration () at /usr/lib64/libglib-2.0.so.0
#3  0x00007f7749269a9b in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#4  0x00007f7749210eeb in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#5  0x00007f7749031c9e in QThread::exec() () at /usr/lib64/libQt5Core.so.5
#6  0x00007f77092bb438 in KCupsConnection::run() () at /usr/lib64/libkcupslib.so
#7  0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#8  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#9  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 9 (Thread 0x7f7724c50640 (LWP 1907)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f7749038dfb in QWaitCondition::wait(QMutex*, QDeadlineTimer) () at /usr/lib64/libQt5Core.so.5
#2  0x00007f774ad4cb57 in  () at /usr/lib64/libQt5Quick.so.5
#3  0x00007f774ad4efe9 in  () at /usr/lib64/libQt5Quick.so.5
#4  0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#5  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#6  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 8 (Thread 0x7f770bfff640 (LWP 1900)):
#0  0x00007f77476411bf in  () at /usr/lib64/libglib-2.0.so.0
#1  0x00007f7747642ee5 in  () at /usr/lib64/libglib-2.0.so.0
#2  0x00007f7747644403 in g_main_context_prepare () at /usr/lib64/libglib-2.0.so.0
#3  0x00007f7747644f3b in  () at /usr/lib64/libglib-2.0.so.0
#4  0x00007f774764512f in g_main_context_iteration () at /usr/lib64/libglib-2.0.so.0
#5  0x00007f7749269a9b in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#6  0x00007f7749210eeb in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#7  0x00007f7749031c9e in QThread::exec() () at /usr/lib64/libQt5Core.so.5
#8  0x00007f774aca0926 in  () at /usr/lib64/libQt5Quick.so.5
#9  0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#10 0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#11 0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 7 (Thread 0x7f773affd640 (LWP 1892)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f77415a3f1b in  () at /usr/lib64/dri/i965_dri.so
#2  0x00007f77415a3767 in  () at /usr/lib64/dri/i965_dri.so
#3  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 6 (Thread 0x7f773b7fe640 (LWP 1891)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f77415a3f1b in  () at /usr/lib64/dri/i965_dri.so
#2  0x00007f77415a3767 in  () at /usr/lib64/dri/i965_dri.so
#3  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 5 (Thread 0x7f773bfff640 (LWP 1890)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f77415a3f1b in  () at /usr/lib64/dri/i965_dri.so
#2  0x00007f77415a3767 in  () at /usr/lib64/dri/i965_dri.so
#3  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 4 (Thread 0x7f7740ec0640 (LWP 1889)):
#0  0x00007f77481d06b2 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f77415a3f1b in  () at /usr/lib64/dri/i965_dri.so
#2  0x00007f77415a3767 in  () at /usr/lib64/dri/i965_dri.so
#3  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 3 (Thread 0x7f7743436640 (LWP 1888)):
#0  0x00007f7748c8583f in poll () at /lib64/libc.so.6
#1  0x00007f774764500e in  () at /usr/lib64/libglib-2.0.so.0
#2  0x00007f774764512f in g_main_context_iteration () at /usr/lib64/libglib-2.0.so.0
#3  0x00007f7749269a9b in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#4  0x00007f7749210eeb in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#5  0x00007f7749031c9e in QThread::exec() () at /usr/lib64/libQt5Core.so.5
#6  0x00007f774a8e22d5 in  () at /usr/lib64/libQt5Qml.so.5
#7  0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#8  0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#9  0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 2 (Thread 0x7f77446b7640 (LWP 1727)):
#0  0x00007fff25de0a57 in  ()
#1  0x00007f7748c57755 in clock_gettime@GLIBC_2.2.5 () at /lib64/libc.so.6
#2  0x00007f7749269391 in  () at /usr/lib64/libQt5Core.so.5
#3  0x00007f7749267c69 in QTimerInfoList::updateCurrentTime() () at /usr/lib64/libQt5Core.so.5
#4  0x00007f7749268245 in QTimerInfoList::timerWait(timespec&) () at /usr/lib64/libQt5Core.so.5
#5  0x00007f77492697ee in  () at /usr/lib64/libQt5Core.so.5
#6  0x00007f77476444e2 in g_main_context_prepare () at /usr/lib64/libglib-2.0.so.0
#7  0x00007f7747644f3b in  () at /usr/lib64/libglib-2.0.so.0
#8  0x00007f774764512f in g_main_context_iteration () at /usr/lib64/libglib-2.0.so.0
#9  0x00007f7749269a9b in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#10 0x00007f7749210eeb in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#11 0x00007f7749031c9e in QThread::exec() () at /usr/lib64/libQt5Core.so.5
#12 0x00007f7749c387c7 in  () at /usr/lib64/libQt5DBus.so.5
#13 0x00007f7749032de1 in  () at /usr/lib64/libQt5Core.so.5
#14 0x00007f77481ca3e9 in start_thread () at /lib64/libpthread.so.0
#15 0x00007f7748c90943 in clone () at /lib64/libc.so.6

Thread 1 (Thread 0x7f7746b6a840 (LWP 1717)):
[KCrash Handler]
#4  0x00007f7748bcca65 in raise () at /lib64/libc.so.6
#5  0x00007f7748bb5864 in abort () at /lib64/libc.so.6
#6  0x00007f7748ff80f7 in  () at /usr/lib64/libQt5Core.so.5
#7  0x00007f7744c87ed9 in  () at /usr/lib64/libQt5WaylandClient.so.5
#8  0x00007f7744c95dce in QtWaylandClient::QWaylandDisplay::flushRequests() () at /usr/lib64/libQt5WaylandClient.so.5
#9  0x00007f7749248b40 in  () at /usr/lib64/libQt5Core.so.5
#10 0x00007f7749269abc in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#11 0x00007f7749210eeb in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () at /usr/lib64/libQt5Core.so.5
#12 0x00007f7749219160 in QCoreApplication::exec() () at /usr/lib64/libQt5Core.so.5
#13 0x000055d82547df92 in  ()
#14 0x00007f7748bb7152 in __libc_start_main () at /lib64/libc.so.6
#15 0x000055d82547eaee in  ()
[Inferior 1 (process 1717) detached]
Comment 4 Walther Pelser 2020-12-25 08:43:51 UTC
Further observation over the last days shows that this occurs nearly (?) always when the Firefox window remains open during hibernate and is restored again during resume. The resume process then takes much longer than usual.
Comment 5 Walther Pelser 2020-12-27 10:31:47 UTC
There are a lot of Firefox bugs related to KDE (see https://bugzilla.mozilla.org/show_bug.cgi?id=1678125 / meta [KDE] KDE issues tracking bug).
This bug seems to belong to that group too, and I have also filed it at Mozilla (https://bugzilla.mozilla.org/show_bug.cgi?id=1684203 / firefox causes restart of kde plasmashell during resume on wayland).
Most of my problems with Firefox on this machine are caused by this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1678807 / can't resize window on wayland. I think this bug may cause instability in KDE/Plasma.
Comment 6 Nate Graham 2021-01-05 17:30:21 UTC
Let's not mix issues; a crash is different from high CPU usage.

The plasmashell process itself is consuming CPU? Or some other process is?
Comment 7 Walther Pelser 2021-01-06 15:09:46 UTC
Hi Nate!
This crash was a few weeks ago; at that time I guessed that it was connected to the bug described above. About two weeks ago there was a similar crash too; there was no report, but a notification on my screen that plasmashell had been changed or installed, although nothing had been changed as far as I remember.
If needed, this crash report can be ignored if it does not fit the bug.
I cannot find a log file showing what KDE (plasmashell) does after the kernel finishes resuming from hibernation. It would be nice if you could help me find one, so that I can file a more precise description. I only see a lock screen, then the Plasma screen, and the described CPU usage and delay during resume from hibernation.
Comment 8 Bug Janitor Service 2021-01-21 04:33:18 UTC
Dear Bug Submitter,

This bug has been in NEEDSINFO status with no change for at least
15 days. Please provide the requested information as soon as
possible and set the bug status as REPORTED. Due to regular bug
tracker maintenance, if the bug is still in NEEDSINFO status with
no change in 30 days the bug will be closed as RESOLVED > WORKSFORME
due to lack of needed information.

For more information about our bug triaging procedures please read the
wiki located here:
https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

If you have already provided the requested information, please
mark the bug as REPORTED so that the KDE team knows that the bug is
ready to be confirmed.

Thank you for helping us make KDE software even better for everyone!
Comment 10 Walther Pelser 2021-02-26 08:10:51 UTC
(In reply to Nate Graham from comment #9)
> Yeah you can retrieve it using coredumpctl. See
> https://community.kde.org/Guidelines_and_HOWTOs/Debugging/
> How_to_create_useful_crash_reports#Retrieving_a_backtrace_using_coredumpctl

Sad, but true: /var/lib/systemd/coredump is empty. No coredump is available.
Comment 11 Nate Graham 2021-02-26 15:56:00 UTC
Oh that sucks. :(

Then I'm afraid this is not actionable, sorry. If it happens again, please get a backtrace with debug symbols. Thanks!
Comment 12 Walther Pelser 2021-02-27 12:24:48 UTC
(In reply to Nate Graham from comment #11)
> Oh that sucks. :(
> 
> Then I'm afraid this is not actionable, sorry. If it happens again, please
> get a backtrace with debug symbols. Thanks!

My comment is: it was a bug!
The behavior mentioned in the description above does not seem to be reproducible any more since Plasma 5.21.0-1.1 / 5.79.0-1.1.
Comment 13 Walther Pelser 2021-03-07 11:20:45 UTC
Today, I got this: (no coredump available)
Application: Plasma (plasmashell), signal: Aborted

[KCrash Handler]
#4  0x00007f91cbc95495 in raise () at /lib64/libc.so.6
#5  0x00007f91cbc7e864 in abort () at /lib64/libc.so.6
#6  0x00007f91cc0fd0e7 in qt_message_fatal (message=<synthetic pointer>..., context=...) at global/qlogging.cpp:1914
#7  QMessageLogger::fatal(char const*, ...) const (this=this@entry=0x7fff07daef80, msg=msg@entry=0x7f91ca3720b8 "The Wayland connection broke. Did the Wayland compositor die?") at global/qlogging.cpp:893
#8  0x00007f91ca2edf69 in QtWaylandClient::QWaylandDisplay::checkError() const (this=<optimized out>) at qwaylanddisplay.cpp:209
#9  QtWaylandClient::QWaylandDisplay::checkError() const (this=<optimized out>) at qwaylanddisplay.cpp:204
#10 0x00007f91ca2fce0a in QtWaylandClient::QWaylandDisplay::flushRequests() (this=0x557e2200e230) at qwaylanddisplay.cpp:222
#11 0x00007f91cc34f980 in doActivate<false>(QObject*, int, void**) (sender=0x557e22036d50, signal_index=4, argv=0x7fff07daf090, argv@entry=0x0) at kernel/qobject.cpp:3898
#12 0x00007f91cc348c60 in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (sender=sender@entry=0x557e22036d50, m=m@entry=0x7f91cc5fd0e0 <QAbstractEventDispatcher::staticMetaObject>, local_signal_index=local_signal_index@entry=1, argv=argv@entry=0x0) at kernel/qobject.cpp:3946
#13 0x00007f91cc315ec3 in QAbstractEventDispatcher::awake() (this=this@entry=0x557e22036d50) at .moc/moc_qabstracteventdispatcher.cpp:149
#14 0x00007f91cc3708fc in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) (this=0x557e22036d50, flags=...) at kernel/qeventdispatcher_glib.cpp:430
#15 0x00007f91cc317ceb in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) (this=this@entry=0x7fff07daf1b0, flags=..., flags@entry=...) at ../../include/QtCore/../../src/corelib/global/qflags.h:69
#16 0x00007f91cc31ff60 in QCoreApplication::exec() () at ../../include/QtCore/../../src/corelib/global/qflags.h:121
#17 0x0000557e21072224 in main(int, char**) (argc=<optimized out>, argv=0x7fff07daf3b0) at /usr/src/debug/plasma5-workspace-5.21.1-1.1.x86_64/shell/main.cpp:247
[Inferior 1 (process 1737) detached]
Comment 14 Walther Pelser 2021-03-07 13:57:58 UTC
Plasma is now 5.21.1 / 5.79.0.
This issue seems to be related to Firefox (especially Beta and Nightly), but I have no way to prove it.
Comment 15 Nate Graham 2021-03-07 14:16:51 UTC
> #7  QMessageLogger::fatal(char const*, ...) const (this=this@entry=0x7fff07daef80, msg=msg@entry=0x7f91ca3720b8 "The Wayland connection broke. Did the Wayland compositor die?") at global/qlogging.cpp:893
This means that kwin_wayland crashed. When it does so, all apps using it (such as Plasma) will crash as well with this error message. See https://bugreports.qt.io/browse/QTBUG-85631. You'll need to get a log for the KWin crash.
Comment 16 Walther Pelser 2021-03-07 15:54:05 UTC
Created attachment 136460 [details]
systemd-coredump 07.03.21 12:22.txt

This is what I found.
Comment 17 Walther Pelser 2021-03-07 15:57:09 UTC
Created attachment 136461 [details]
search 30.000 lines of journald item "die"

"plasmashell	The Wayland connection broke. Did the Wayland compositor die?"
This line is nothing special for me , as I have very often seen it before.
Comment 18 Walther Pelser 2021-03-07 16:01:23 UTC
Maybe the attachments can help. I could not remove "needsinfo" or change "backtrace".
Comment 19 Nate Graham 2021-03-07 16:04:13 UTC
Yes, that line means that the compositor (KWin) crashed. And like I mentioned, when it crashes, all apps crash too. The backtrace for the app is therefore not useful because we know why the app crashed. What we need to know is why the compositor crashed. So for that, we need a backtrace of the compositor's crash log. The latest thing you attached is from plasmashell as well. You'll need to find the crash for kwin_wayland instead. See https://community.kde.org/Guidelines_and_HOWTOs/Debugging/How_to_create_useful_crash_reports#Retrieving_a_backtrace_using_coredumpctl
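The coredumpctl workflow referred to above can be sketched roughly as follows. This is only an illustration based on coredumpctl(1), not the exact commands from the linked wiki page; whether a kwin_wayland dump exists at all depends on the machine, so each step degrades gracefully.

```shell
#!/bin/sh
# Sketch: locate and inspect a kwin_wayland crash dump via systemd-coredump.

if ! command -v coredumpctl >/dev/null 2>&1; then
    echo "systemd-coredump tooling is not installed on this system"
    exit 0
fi

# Show only the compositor's recorded crashes (newest entries last).
coredumpctl list /usr/bin/kwin_wayland || echo "no kwin_wayland coredumps recorded"

# Print metadata for the newest match: signal, timestamp, and whether the
# dump is present or truncated (a truncated dump yields no usable backtrace).
coredumpctl info /usr/bin/kwin_wayland || true

# For the actual backtrace, load the newest dump into gdb interactively:
#   coredumpctl gdb /usr/bin/kwin_wayland
# then at the (gdb) prompt run:
#   thread apply all bt
# Debug symbols for kwin, Qt, and glibc must be installed for readable frames.
echo "inspection attempt finished"
```

Note that a "truncated" entry, like the one later reported in comment 20, is exactly the case where gdb cannot reconstruct the stack.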
Comment 20 Walther Pelser 2021-03-08 17:49:57 UTC
During the last Plasma update I got all (!) of these coredumps:
Thu 2021-03-04 17:11:08 CET    5508  1000   100  11 present   /usr/bin/pipewire-media-session
Thu 2021-03-04 16:51:16 CET    2150  1000   100   6 present   /usr/bin/akonadi_ical_resource
Thu 2021-03-04 16:51:12 CET    1900  1000   100   6 present   /usr/bin/kded5
Thu 2021-03-04 16:51:11 CET    1939  1000   100   6 present   /usr/lib64/libexec/org_kde_powerdevil
Thu 2021-03-04 16:51:11 CET    2126  1000   100   6 present   /usr/bin/akonadi_akonotes_resource
Thu 2021-03-04 16:51:11 CET    2163  1000   100   6 present   /usr/bin/akonadi_mailfilter_agent
Thu 2021-03-04 16:51:10 CET    2171  1000   100   6 present   /usr/bin/akonadi_sendlater_agent
Thu 2021-03-04 16:51:10 CET    2125  1000   100   6 present   /usr/bin/akonadi_akonotes_resource
Thu 2021-03-04 16:51:10 CET    2174  1000   100   6 present   /usr/bin/akonadi_unifiedmailbox_agent
Thu 2021-03-04 16:51:09 CET    2128  1000   100   6 present   /usr/bin/akonadi_archivemail_agent
Thu 2021-03-04 16:51:08 CET    2142  1000   100   6 present   /usr/bin/akonadi_followupreminder_agent
Thu 2021-03-04 16:51:08 CET    1991  1000   100   6 present   /usr/bin/korgac
Thu 2021-03-04 16:51:07 CET    2143  1000   100   6 present   /usr/bin/akonadi_ical_resource
Thu 2021-03-04 16:51:06 CET    2170  1000   100   6 present   /usr/bin/akonadi_notes_agent
Thu 2021-03-04 16:51:06 CET    2162  1000   100   6 present   /usr/bin/akonadi_maildispatcher_agent
Thu 2021-03-04 16:51:05 CET    1956  1000   100   6 present   /usr/lib64/libexec/kdeconnectd
Thu 2021-03-04 16:51:04 CET    2167  1000   100   6 present   /usr/bin/akonadi_newmailnotifier_agent
Thu 2021-03-04 16:51:03 CET    2151  1000   100   6 present   /usr/bin/akonadi_indexing_agent
Thu 2021-03-04 16:51:03 CET    2158  1000   100   6 present   /usr/bin/akonadi_maildir_resource
Thu 2021-03-04 16:51:03 CET    2148  1000   100   6 present   /usr/bin/akonadi_ical_resource
Thu 2021-03-04 16:51:03 CET    2127  1000   100   6 present   /usr/bin/akonadi_akonotes_resource
Thu 2021-03-04 16:51:02 CET   30003  1000   100   6 present   /usr/bin/plasmashell
Thu 2021-03-04 16:51:01 CET    2136  1000   100   6 present   /usr/bin/akonadi_contacts_resource
Thu 2021-03-04 16:51:00 CET    2140  1000   100   6 present   /usr/bin/akonadi_contacts_resource
Thu 2021-03-04 16:51:00 CET    2166  1000   100   6 present   /usr/bin/akonadi_migration_agent
Thu 2021-03-04 16:50:59 CET    2139  1000   100   6 present   /usr/bin/akonadi_contacts_resource
Thu 2021-03-04 16:50:59 CET    2138  1000   100   6 present   /usr/bin/akonadi_contacts_resource
Thu 2021-03-04 16:50:59 CET    2129  1000   100   6 present   /usr/bin/akonadi_birthdays_resource
Thu 2021-03-04 16:50:59 CET    2137  1000   100   6 present   /usr/bin/akonadi_contacts_resource
Thu 2021-03-04 16:49:37 CET    1745  1000   100  11 truncated /usr/bin/kwin_wayland
Thu 2021-03-04 16:47:54 CET   24005  1000   100  11 present   /usr/bin/plasmashell
Thu 2021-03-04 16:19:02 CET    1933  1000   100   6 present   /usr/bin/plasmashell
Then I rebooted the PC. The list contains a kwin_wayland coredump, but it does NOT match comment #13, as it is from two days earlier. Since it is from kwin_wayland, I mention it here anyway.
Comment 21 Walther Pelser 2021-03-08 17:53:32 UTC
Created attachment 136493 [details]
kwin_wayland coredump

It does not seem very useful.
Comment 22 Walther Pelser 2021-03-08 17:58:44 UTC
Created attachment 136496 [details]
wayland-session.log

My problems always seem to be connected with SDDM, so I have added this file. It contains messages that are unusual to me.
Comment 23 Nate Graham 2021-03-08 19:52:40 UTC
Thanks anyway!
Comment 24 Walther Pelser 2021-03-17 15:38:35 UTC
Created attachment 136793 [details]
plasmashell coredumpctl 17.03.21.txt
Comment 25 Walther Pelser 2021-03-31 15:03:27 UTC
Hi Nate!
Today I tried to force a plasmashell crash by hibernating with FF Nightly 89.0a1 open. After resume from hibernate, plasmashell crashed again as expected. There is a coredumpctl entry for /usr/bin/plasmashell and a plasmashell-20210331-163806.kcrash; the latter seems to have been sent to KDE.
If you are interested, please let me know.
Comment 26 Vlad Zahorodnii 2021-03-31 15:40:52 UTC
Can you run plasma with WAYLAND_DEBUG=1 and try to reproduce the crash?

    env WAYLAND_DEBUG=1 plasmashell --replace > log.txt 2>&1

Please redact any sensitive information in the log.
Comment 27 David Redondo 2022-04-19 13:56:27 UTC
waiting for wayland log
Comment 28 Bug Janitor Service 2022-05-04 04:35:14 UTC
Dear Bug Submitter,

This bug has been in NEEDSINFO status with no change for at least
15 days. Please provide the requested information as soon as
possible and set the bug status as REPORTED. Due to regular bug
tracker maintenance, if the bug is still in NEEDSINFO status with
no change in 30 days the bug will be closed as RESOLVED > WORKSFORME
due to lack of needed information.

For more information about our bug triaging procedures please read the
wiki located here:
https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

If you have already provided the requested information, please
mark the bug as REPORTED so that the KDE team knows that the bug is
ready to be confirmed.

Thank you for helping us make KDE software even better for everyone!
Comment 29 Bug Janitor Service 2022-05-19 04:35:29 UTC
This bug has been in NEEDSINFO status with no change for at least
30 days. The bug is now closed as RESOLVED > WORKSFORME
due to lack of needed information.

For more information about our bug triaging procedures please read the
wiki located here:
https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

Thank you for helping us make KDE software even better for everyone!