In thread view, deleting a message that belongs to a thread crashes KMail. Deleting a single message that does not belong to any thread does not crash. Deleting messages in flat (non-threaded) mode works fine. I build and run KMail via kdesrc-build on Arch Linux; there was no crash a month ago. I suspect this is closely related to the client-side threading cache, see https://phabricator.kde.org/D1636. It crashes because, at the point where pParent's status is checked, pParent is no longer valid but is also not null. I am reading the messagelib code but have not fully grasped the concurrency yet.

(gdb) l
453	    }
454	}
455	
456	qint32 Akonadi::MessageStatus::toQInt32() const
457	{
458	    return mStatus;
459	}
460	
461	void Akonadi::MessageStatus::fromQInt32(qint32 status)
462	{
(gdb) p mStatus
Cannot access memory at address 0x100000060

Reproducible: Always

Steps to Reproduce:
1. Set a folder to threaded view
2. Open a message in a thread
3. Delete the message

Actual Results:
Thread 1 "kmail" received signal SIGSEGV, Segmentation fault.

Expected Results:
The message is deleted successfully.

Thread 1 "kmail" received signal SIGSEGV, Segmentation fault.
Akonadi::MessageStatus::toQInt32 (this=0x100000060) at /home/chaos/kdesrc/kde/kdepimlibs/akonadi-mime/src/messagestatus.cpp:458
458	    return mStatus;
(gdb) bt
#0  Akonadi::MessageStatus::toQInt32 (this=0x100000060) at /home/chaos/kdesrc/kde/kdepimlibs/akonadi-mime/src/messagestatus.cpp:458
#1  0x00007ffff00001b6 in MessageList::Core::ModelPrivate::attachMessageToParent (this=this@entry=0xb4c6f0, pParent=pParent@entry=0x1f46120, mi=<optimized out>, attachOptions=attachOptions@entry=MessageList::Core::ModelPrivate::SkipCacheUpdate) at /home/chaos/kdesrc/kde/pim/messagelib/messagelist/src/core/model.cpp:2181
#2  0x00007ffff0001741 in MessageList::Core::ModelPrivate::viewItemJobStepInternalForJobPass2 (this=this@entry=0xb4c6f0, job=job@entry=0x1d30f00, elapsedTimer=...) at /home/chaos/kdesrc/kde/pim/messagelib/messagelist/src/core/model.cpp:2635
#3  0x00007ffff00038d4 in MessageList::Core::ModelPrivate::viewItemJobStepInternalForJob (this=this@entry=0xb4c6f0, job=job@entry=0x1d30f00, elapsedTimer=...) at /home/chaos/kdesrc/kde/pim/messagelib/messagelist/src/core/model.cpp:3471
#4  0x00007ffff0003dc3 in MessageList::Core::ModelPrivate::viewItemJobStepInternal (this=this@entry=0xb4c6f0) at /home/chaos/kdesrc/kde/pim/messagelib/messagelist/src/core/model.cpp:3758
#5  0x00007ffff000448c in MessageList::Core::ModelPrivate::viewItemJobStep (this=0xb4c6f0) at /home/chaos/kdesrc/kde/pim/messagelib/messagelist/src/core/model.cpp:3938
#6  0x00007ffff164585e in QMetaObject::activate(QObject*, int, int, void**) () from /usr/lib/libQt5Core.so.5
#7  0x00007ffff1652568 in QTimer::timerEvent(QTimerEvent*) () from /usr/lib/libQt5Core.so.5
#8  0x00007ffff1646303 in QObject::event(QEvent*) () from /usr/lib/libQt5Core.so.5
#9  0x00007ffff22f9e3c in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/libQt5Widgets.so.5
#10 0x00007ffff23015b1 in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQt5Widgets.so.5
#11 0x00007ffff1619c80 in QCoreApplication::notifyInternal2(QObject*, QEvent*) () from /usr/lib/libQt5Core.so.5
#12 0x00007ffff166d51e in QTimerInfoList::activateTimers() () from /usr/lib/libQt5Core.so.5
#13 0x00007ffff166da41 in ?? () from /usr/lib/libQt5Core.so.5
#14 0x00007fffdc9e0dd7 in g_main_context_dispatch () from /usr/lib/libglib-2.0.so.0
#15 0x00007fffdc9e1040 in ?? () from /usr/lib/libglib-2.0.so.0
#16 0x00007fffdc9e10ec in g_main_context_iteration () from /usr/lib/libglib-2.0.so.0
#17 0x00007ffff166e57f in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQt5Core.so.5
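For illustration, here is a minimal, self-contained C++ sketch of the failure mode I suspect. This is not the actual messagelib code; the cache layout and all names here are my assumptions. The point is that a cache holding raw pointers can keep handing out a parent pointer after the item has been deleted, so the pointer is non-null yet dangling, exactly matching the this=0x100000060 use-after-free above:

// Minimal sketch (NOT the real messagelib code) of a threading cache
// that returns a stale, non-null pointer after the parent is deleted.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct Item {
    int32_t mStatus = 0;
    int32_t toQInt32() const { return mStatus; } // crashes if 'this' dangles
};

int main()
{
    // Hypothetical cache: parent id -> raw parent pointer.
    std::unordered_map<uint64_t, Item *> threadingCache;

    Item *parent = new Item;
    threadingCache[42] = parent;

    delete parent; // the thread leader is removed from the view...

    // ...but the cache entry was never erased, so a later lookup
    // still succeeds and returns a non-null, dangling pointer.
    Item *pParent = threadingCache[42];
    std::cout << pParent->toQInt32(); // undefined behaviour: use-after-free
}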
I have added Dan, who created this code (the threading cache). Indeed, I have seen some crashes too.
Dan, any news? I get several crashes a day. If there is no solution, I think I will revert the threading cache for 5.3.0 to avoid it. Regards
*** Bug 367031 has been marked as a duplicate of this bug. ***
*** Bug 368092 has been marked as a duplicate of this bug. ***
*** Bug 366862 has been marked as a duplicate of this bug. ***
*** Bug 368150 has been marked as a duplicate of this bug. ***
*** Bug 368231 has been marked as a duplicate of this bug. ***
*** Bug 368323 has been marked as a duplicate of this bug. ***
*** Bug 368387 has been marked as a duplicate of this bug. ***
*** Bug 368400 has been marked as a duplicate of this bug. ***
Judging from all the other bug reports, this seems to be triggered by the removal of a message from a threaded message list. It has been reported for manual deletion of a message, manual moves of a message to another folder, applying a filter manually and "crashing in the background" (automatically applied filter on incoming mail, I guess). Suggestions for workarounds: avoid threaded view for a while, or at least don't move mails while in threaded view. This includes mail filters. Of course, I know that these are not very practical, sorry...
Thanks for the heads-up - a bad workaround is still better than none.
The pattern for me so far is that these crashes happen when I delete the root email of a thread while the thread is unfolded.
Git commit c335c60684fb6de58fae567234c72277a3b1bf58 by Daniel Vrátil.
Committed on 15/09/2016 at 08:42.
Pushed by dvratil into branch 'Applications/16.08'.

Expire dying parent from threading cache before processing children

Fixes a crash in the Model when a thread leader is removed and a ViewJob
for its children is started to re-attach the subtree to a new parent node.
The second pass would then get a pointer to the now-deleted parent from
the threading cache, eventually leading to a crash.

This patch makes sure the parent is expired from the cache before the
ViewJobs are started. The cache miss triggers an actual threading
calculation in Pass2 and Pass3 and updates our cache.

FIXED-IN: 16.08.1

M  +5    -1    messagelist/src/core/model.cpp
M  +1    -0    messagelist/src/core/threadingcache.h

http://commits.kde.org/messagelib/c335c60684fb6de58fae567234c72277a3b1bf58
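To make the idea of the fix concrete, here is a hedged C++ sketch of the ordering it enforces. The class and method names below are illustrative assumptions, not the real ThreadingCache API from threadingcache.h: the dying parent is erased from the cache before the child ViewJobs run, so Pass2/Pass3 get a cache miss and recompute the threading instead of dereferencing a stale pointer:

// Sketch of the fix's ordering (names are hypothetical, not messagelib's).
#include <cstdint>
#include <unordered_map>

struct Item { /* message item */ };

class ThreadingCache {
public:
    // Added by the fix (conceptually): drop a parent that is about to die.
    void expireParent(uint64_t parentId) { mParents.erase(parentId); }

    // Lookup used by Pass2/Pass3; a miss forces a real threading calculation.
    Item *parentForItem(uint64_t parentId) const
    {
        auto it = mParents.find(parentId);
        return it == mParents.end() ? nullptr : it->second;
    }

private:
    std::unordered_map<uint64_t, Item *> mParents;
};

// Hypothetical caller showing the order the commit enforces.
void removeThreadLeader(ThreadingCache &cache, uint64_t leaderId, Item *leader)
{
    cache.expireParent(leaderId);        // 1. expire BEFORE starting child jobs
    // startViewJobsForChildren(leader); // 2. jobs now miss the cache and rethread
    delete leader;                       // 3. safe: no stale cache entry remains
}

With this ordering, the second pass can never receive the deleted parent from the cache; it falls back to recomputing the thread parent and repopulates the cache with a valid pointer.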
*** Bug 368496 has been marked as a duplicate of this bug. ***
*** Bug 368837 has been marked as a duplicate of this bug. ***
*** Bug 369035 has been marked as a duplicate of this bug. ***