Summary: Crash when importing alarms
Product: [Applications] kalarm
Reporter: Jiri Palecek <jpalecek>
Component: general
Assignee: David Jarvie <djarvie>
Status: RESOLVED FIXED
Severity: normal
Priority: NOR
Version First Reported In: unspecified
Target Milestone: ---
Platform: Other
OS: Linux
Latest Commit: 1cefdb07c558721dea7a90a2216b61d21425a8d5
Version Fixed In: 19.08.1

Attachments:
- proposed patch
- This patch looks better
Description (Jiri Palecek, 2019-08-05 00:23:29 UTC)
BTW valgrind says this about the problem:

==23380== Invalid read of size 4
==23380==    at 0x4BF3134: KAlarmCal::KAEvent::category() const (kaevent.cpp:1769)
==23380==    by 0x1B0C31: AlarmCalendar::events(Akonadi::Collection const&, QFlags<KAlarmCal::CalEvent::Type>) const (alarmcalendar.cpp:1340)
==23380==    by 0x1B0EAA: events (alarmcalendar.h:68)
==23380==    by 0x1B0EAA: AlarmCalendar::checkForDisabledAlarms() (alarmcalendar.cpp:1439)
==23380==    by 0x1B22F0: AlarmCalendar::removeKAEvents(long long, bool, QFlags<KAlarmCal::CalEvent::Type>) (alarmcalendar.cpp:502)
==23380==    by 0x756C483: QMetaObject::activate(QObject*, int, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x756C90C: QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x2A2B91: AkonadiModel::collectionStatusChanged(Akonadi::Collection const&, AkonadiModel::Change, QVariant const&, bool) (moc_akonadimodel.cpp:416)
==23380==    by 0x268546: AkonadiModel::setCollectionChanged(Akonadi::Collection const&, QSet<QByteArray> const&, bool) (akonadimodel.cpp:1707)
==23380==    by 0x2A90E5: slotCollectionChanged (akonadimodel.h:261)
==23380==    by 0x2A90E5: AkonadiModel::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) (moc_akonadimodel.cpp:214)
==23380==    by 0x756C33A: QMetaObject::activate(QObject*, int, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x756C90C: QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x5431EB5: Akonadi::Monitor::collectionChanged(Akonadi::Collection const&, QSet<QByteArray> const&) (in /usr/lib/i386-linux-gnu/libKF5AkonadiCore.so.5.9.3.abi2)
==23380==  Address 0x1081c7b8 is 0 bytes inside a block of size 4 free'd
==23380==    at 0x480BD67: operator delete(void*) (in /usr/lib/i386-linux-gnu/valgrind/vgpreload_memcheck-x86-linux.so)
==23380==    by 0x1B209E: AlarmCalendar::removeKAEvents(long long, bool, QFlags<KAlarmCal::CalEvent::Type>) (alarmcalendar.cpp:485)
==23380==    by 0x756C483: QMetaObject::activate(QObject*, int, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x756C90C: QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x2A2B91: AkonadiModel::collectionStatusChanged(Akonadi::Collection const&, AkonadiModel::Change, QVariant const&, bool) (moc_akonadimodel.cpp:416)
==23380==    by 0x268546: AkonadiModel::setCollectionChanged(Akonadi::Collection const&, QSet<QByteArray> const&, bool) (akonadimodel.cpp:1707)
==23380==    by 0x2A90E5: slotCollectionChanged (akonadimodel.h:261)
==23380==    by 0x2A90E5: AkonadiModel::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) (moc_akonadimodel.cpp:214)
==23380==    by 0x756C33A: QMetaObject::activate(QObject*, int, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x756C90C: QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x5431EB5: Akonadi::Monitor::collectionChanged(Akonadi::Collection const&, QSet<QByteArray> const&) (in /usr/lib/i386-linux-gnu/libKF5AkonadiCore.so.5.9.3.abi2)
==23380==    by 0x543A461: Akonadi::MonitorPrivate::emitCollectionNotification(Akonadi::Protocol::CollectionChangeNotification const&, Akonadi::Collection const&, Akonadi::Collection const&, Akonadi::Collection const&) (in /usr/lib/i386-linux-gnu/libKF5AkonadiCore.so.5.9.3.abi2)
==23380==    by 0x543FD3F: Akonadi::MonitorPrivate::emitNotification(QSharedPointer<Akonadi::Protocol::ChangeNotification> const&) (in /usr/lib/i386-linux-gnu/libKF5AkonadiCore.so.5.9.3.abi2)
==23380==  Block was alloc'd at
==23380==    at 0x480ACAB: operator new(unsigned int) (in /usr/lib/i386-linux-gnu/valgrind/vgpreload_memcheck-x86-linux.so)
==23380==    by 0x1B3A5A: AlarmCalendar::slotEventChanged(AkonadiModel::Event const&) (alarmcalendar.cpp:564)
==23380==    by 0x1B3CAE: AlarmCalendar::slotEventsAdded(QList<AkonadiModel::Event> const&) (alarmcalendar.cpp:530)
==23380==    by 0x756C483: QMetaObject::activate(QObject*, int, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x756C90C: QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x2A2BF1: AkonadiModel::eventsAdded(QList<AkonadiModel::Event> const&) (moc_akonadimodel.cpp:423)
==23380==    by 0x268D5C: AkonadiModel::slotRowsInserted(QModelIndex const&, int, int) (akonadimodel.cpp:1634)
==23380==    by 0x756C483: QMetaObject::activate(QObject*, int, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x756C90C: QMetaObject::activate(QObject*, QMetaObject const*, int, void**) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x74F05C8: QAbstractItemModel::rowsInserted(QModelIndex const&, int, int, QAbstractItemModel::QPrivateSignal) (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x74F8138: QAbstractItemModel::endInsertRows() (in /usr/lib/i386-linux-gnu/sse2/libQt5Core.so.5.11.3)
==23380==    by 0x5522231: ??? (in /usr/lib/i386-linux-gnu/libKF5AkonadiCore.so.5.9.3.abi2)

David Jarvie:

Thank you for the good quality bug report. Even with the information you provided, it isn't clear what went wrong. It's possible (but not certain) that if any events have UIDs which are duplicated between different active alarm calendar files, this may trigger the crash. To help narrow down the cause, could you please check for duplicated event UIDs as follows. Check all the *active* alarm calendar files which were enabled in the left pane of the KAlarm main window, and also the calendar which you were activating when the crash occurred. To do this, execute the command

    grep ^UID: file1 file2 ... | sort | uniq -c | grep -v ^1

(substituting the actual file names, of course). This should output all event UIDs which are duplicated. If there are any duplicated UIDs, please check whether any or all of them occur in the calendar which caused the crash, by grepping within that file for the duplicated UIDs.

David Jarvie:

Oops, the command should be

    grep ^UID: file1 file2 ... | sort | uniq -c | grep -v "^ *1 "

David Jarvie (comment #4):

The valgrind crash is in fact different from the original crash. The valgrind crash shows that multiple calendar status changes are being received in AlarmCalendar before the processing of the previous one has completed. The fix for this will be to ensure that these status indications are queued so that they are never processed in parallel. The original crash only shows one calendar status change, so my previous Comment #2 and Comment #3 are still valid.

Jiri Palecek:

Hello, thanks for your effort. Yes, there are duplicate items:

$ grep ^UID ~/.kde/share/apps/kalarm/*.ics | sort -k 3 -t:
/home/jirka/.kde/share/apps/kalarm/calendar.ics:UID:b355410c-07af-4ded-abf3-d887430000a1
/home/jirka/.kde/share/apps/kalarm/displaying.ics:UID:KAlarm-disp-1593075078.221
/home/jirka/.kde/share/apps/kalarm/expired.ics:UID:KAlarm-exp-959849771.278
/home/jirka/.kde/share/apps/kalarm/expired.ics:UID:KAlarm-exp-959849771.278
/home/jirka/.kde/share/apps/kalarm/expired.ics:UID:KAlarm-exp-981216950.140
/home/jirka/.kde/share/apps/kalarm/expired.ics:UID:KAlarm-exp-981216950.140
/home/jirka/.kde/share/apps/kalarm/calendar.ics:UID:KAlarm-1906626020.755
/home/jirka/.kde/share/apps/kalarm/calendar.ics:UID:KAlarm-727370443.645
/home/jirka/.kde/share/apps/kalarm/calendar.ics:UID:6e28d948-c6ba-4127-ac36-8a01b672de95
/home/jirka/.kde/share/apps/kalarm/calendar.ics:UID:9f3cd70a-9e1d-4d08-81ae-8dea00eaa34c

Jiri Palecek:

(In reply to David Jarvie from comment #4)
> The valgrind crash is in fact different from the original crash.

I don't think so. Yes, the backtraces (their upper frames) are different, but I think that isn't important: the immediate place of the crash is the same, and the reason, a dangling pointer, is the same in both traces. I discovered the dangling pointer in gdb as well. My hypothesis is that while the native run goes through the same dangling pointer dereference as the valgrind run, natively it gets through because the memory hasn't been overwritten yet; it just crashes later, when the freed memory gets reused.

> valgrind crash shows that multiple calendar status changes are being
> received in AlarmCalendar, before the processing of the previous one has
> completed.

What do you mean? I don't see any indication of that from valgrind; IMHO it just crashed inside the same removeKAEvents call that deleted the data. Given that, I looked at AlarmCalendar::removeKAEvents and it really seems fishy: it haphazardly deletes some events, then removes the collection, which might or might not still contain the pointers to the deleted data. It seems this could cause the crash I see. Anyway, I have prepared a patch that fixes this method by leaving only valid pointers in mResourceMap. It makes the crash go away (although I got another crash under valgrind, accompanied by a message that the connection to Akonadi was lost, possibly because execution under valgrind was so slow). I will post the patch here, although I was planning to send it through Phabricator.

Created attachment 122027 [details]
proposed patch
Will send it through Phabricator in a few days; posting it here now so you can comment on it.
Jiri Palecek:

Comment on attachment 122027 [details]
proposed patch

Disregard that.
Jiri Palecek:

Created attachment 122029 [details]
This patch looks better
David Jarvie (comment #9):

Yes, there is a bug in removeKAEvents, and your patch looks good. I'll wait for you to submit it in Phabricator.

The valgrind trace shows that a calendar added is being processed, and that that is interrupted (while a 'new' is being executed) by the removal of a calendar, and that in turn is interrupted (while a 'delete' is being executed) by another removal of a calendar. Obviously valgrind makes this more likely to happen since the code runs slower, but it could potentially still happen in normal use. The code in AlarmCalendar is not designed to work in such circumstances (for example, removeKAEvents deletes each KAEvent instance individually before references to them are finally all removed from mResourceMap), so a queuing mechanism is needed.

Jiri Palecek:

(In reply to David Jarvie from comment #9)
> Yes, there is a bug in removeKAEvents, and your patch looks good. I'll wait
> for you to submit it in Phabricator.

I've posted it.

> The valgrind trace shows that a calendar added is being processed, and that
> that is interrupted (while a 'new' is being executed) by the removal of a
> calendar, and that in turn is interrupted (while a 'delete' is being
> executed) by another removal of a calendar.

I think this is a misunderstanding: the valgrind report for a bad access contains three traces, one at the time of the access, a second at the time the block was freed, and a third at the time of its allocation. It isn't one backtrace of a 'new' interrupted while being executed; it would be very odd if that could actually happen. I admit it's not entirely legible here. I'll try to use attachments next time, although it's tricky when you just copy output from a terminal.

David Jarvie:

Ok, that makes sense. Thanks for your Phabricator patch, which I have accepted.

David Jarvie (comment #12):

Can you please commit your Phabricator patch so that this bug report can be closed? Thanks.

Jiri Palecek:

(In reply to David Jarvie from comment #12)
> Can you please commit your Phabricator patch so that this bug report can be
> closed? Thanks.

Hardly, I have no access.

David Jarvie:

Ok, I've committed it for you. Thanks for the fix. Commit 1cefdb07c558721dea7a90a2216b61d21425a8d5 to Applications/19.08 branch.