Disconnected IMAP leaks very large amounts of memory and exhibits what must be O(n^2) or worse (perhaps exponential) behaviour when checking mail. It results in 100% RAM and CPU usage during a mail check. Valgrind writes hundreds of kb of information about this, including memory corruption:

==32137== 1 errors in context 1 of 102:
==32137== Invalid read of size 4
==32137==    at 0x404C6C76: KMFolderCachedImap::getMessagesResult(KIO::Job*, bool) (/opt/qt-copy/include/qshared.h:50)
==32137==    by 0x404C6001: KMFolderCachedImap::slotGetLastMessagesResult(KIO::Job*) (kmfoldercachedimap.cpp:946)
==32137==    by 0x404CA38F: KMFolderCachedImap::qt_invoke(int, QUObject*) (/opt/qt-copy/include/private/qucom_p.h:312)
==32137==    by 0x41AF5548: QObject::activate_signal(QConnectionList*, QUObject*) (/opt/qt-copy/src/kernel/qobject.cpp:2333)
==32137== Address 0x50946FAC is 16 bytes inside a block of size 100 free'd
==32137==    at 0x40027C2D: __builtin_delete (vg_replace_malloc.c:233)
==32137==    by 0x40027C4B: operator delete(void*) (vg_replace_malloc.c:242)
==32137==    by 0x40498E38: QMapPrivate<KIO::Job*, KMail::ImapAccountBase::jobData>::clear(QMapNode<KIO::Job*, KMail::ImapAccountBase::jobData>*) (/opt/qt-copy/include/qstring.h:848)
==32137==    by 0x40498E52: QMapPrivate<KIO::Job*, KMail::ImapAccountBase::jobData>::clear(QMapNode<KIO::Job*, KMail::ImapAccountBase::jobData>*) (/opt/qt-copy/include/qmap.h:488)
==32137==
==32137== 1 errors in context 79 of 102:
==32137== Invalid read of size 4
==32137==    at 0x41E44C38: QMapPrivateBase::removeAndRebalance(QMapNodeBase*, QMapNodeBase*&, QMapNodeBase*&, QMapNodeBase*&) (/opt/qt-copy/src/tools/qmap.cpp:129)
==32137==    by 0x404C7820: KMFolderCachedImap::slotListResult(KIO::Job*) (/opt/qt-copy/include/qmap.h:385)
==32137==    by 0x404CA3E9: KMFolderCachedImap::qt_invoke(int, QUObject*) (/opt/qt-copy/include/private/qucom_p.h:312)
==32137==    by 0x41AF5548: QObject::activate_signal(QConnectionList*, QUObject*) (/opt/qt-copy/src/kernel/qobject.cpp:2333)
==32137== Address 0x50D4D0F4 is 0 bytes inside a block of size 100 free'd
==32137==    at 0x40027C2D: __builtin_delete (vg_replace_malloc.c:233)
==32137==    by 0x40027C4B: operator delete(void*) (vg_replace_malloc.c:242)
==32137==    by 0x40498E38: QMapPrivate<KIO::Job*, KMail::ImapAccountBase::jobData>::clear(QMapNode<KIO::Job*, KMail::ImapAccountBase::jobData>*) (/opt/qt-copy/include/qstring.h:848)
==32137==    by 0x40498B93: QMapPrivate<KIO::Job*, KMail::ImapAccountBase::jobData>::clear() (/opt/qt-copy/include/qmap.h:477)
==32137==
==32137== 180308 bytes in 1924 blocks are definitely lost in loss record 128 of 131
==32137==    at 0x400279E7: __builtin_new (vg_replace_malloc.c:172)
==32137==    by 0x40027A3E: operator new(unsigned) (vg_replace_malloc.c:185)
==32137==    by 0x4056253E: KMFolderIndex::setIndexEntry(int, KMMessage*) (kmfolderindex.cpp:455)
==32137==    by 0x40406EDC: KMFolder::unGetMsg(int) (kmfolder.cpp:547)
==32137==
==32137== 2402789 bytes in 19903 blocks are still reachable in loss record 130 of 131
==32137==    at 0x40027AD3: __builtin_vec_new (vg_replace_malloc.c:197)
==32137==    by 0x40027B2A: operator new[](unsigned) (vg_replace_malloc.c:210)
==32137==    by 0x41E52BDE: internalLatin1ToUnicode(char const*, unsigned*, unsigned) (/opt/qt-copy/src/tools/qstring.cpp:1166)
==32137==    by 0x41E537A2: QString::QString(char const*) (/opt/qt-copy/src/tools/qstring.cpp:1473)
==32137==
==32137== 4412184 bytes in 79129 blocks are still reachable in loss record 131 of 131
==32137==    at 0x400279E7: __builtin_new (vg_replace_malloc.c:172)
==32137==    by 0x40027A3E: operator new(unsigned) (vg_replace_malloc.c:185)
==32137==    by 0x41E52D07: QString::makeSharedNull() (/opt/qt-copy/src/tools/qstring.cpp:1339)
==32137==    by 0x41A0801F: QString::QString() (../include/qstring.h:840)

Profiling shows huge amounts of time are spent in the event loop - I think timers are going crazy - and very little network traffic is occurring.
It is also painting the screen far too many times, and the statusbar progress bar doesn't update properly.
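The invalid reads above follow a common pattern: a slot reads a QMap entry (the per-job jobData) after something else has already cleared the map. A minimal sketch of the safe pattern, using std::map as a stand-in for Qt's QMap and hypothetical Job/JobData names (not KMail's actual code):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-ins for KIO::Job and ImapAccountBase::jobData.
struct Job { int id; };
struct JobData { std::string folder; int items = 0; };

// Safe pattern: re-look the job up on entry and bail out if the entry is
// gone (e.g. the whole map was cleared by an abort), instead of holding a
// reference across code that may mutate the map.
bool finishJob(std::map<Job*, JobData>& jobs, Job* job) {
    auto it = jobs.find(job);
    if (it == jobs.end())
        return false;   // entry already removed; do not touch freed data
    // ... process it->second here, without calling anything that clears
    //     the map behind our back ...
    jobs.erase(it);     // invalidates exactly one iterator: our own
    return true;
}
```

Erasing only the caller's own entry, and treating a missing entry as "job already cancelled", avoids the read-after-free that Valgrind flags in slotGetLastMessagesResult and slotListResult.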
George, how about bug 67894? Can we collapse those two bugs into one? This one has some nice backtrace info, so maybe we should mark 67894 as a duplicate of this one?
*** Bug 67894 has been marked as a duplicate of this bug. ***
*** Bug 68183 has been marked as a duplicate of this bug. ***
There were indeed a couple of O(n²) problems in there. I've fixed those now, but not the leaks yet.
There has been a big speed improvement, but it's still not "shippable" yet. It still takes far too long to synchronize my folders (10 minutes or so). The time factor is not bandwidth-related; in fact very little bandwidth is being used. It seems like X server traffic and CPU usage are the two biggest factors.
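If X server traffic (and the excessive repainting mentioned earlier) dominates the sync time, one standard remedy is to throttle progress updates so the GUI repaints at most a few times per second rather than once per message. A minimal sketch of that idea, with illustrative names rather than KMail's actual API:

```cpp
#include <cassert>
#include <chrono>

// Emit at most one progress-bar repaint per interval; all other update
// requests are dropped so the event loop stays free for real work.
class ProgressThrottle {
    using clock = std::chrono::steady_clock;
    clock::time_point last_{};
    std::chrono::milliseconds interval_;
    bool started_ = false;
public:
    explicit ProgressThrottle(std::chrono::milliseconds interval)
        : interval_(interval) {}

    // Returns true when the caller should actually repaint.
    bool shouldUpdate() {
        auto now = clock::now();
        if (started_ && now - last_ < interval_)
            return false;   // too soon; skip this repaint
        started_ = true;
        last_ = now;
        return true;
    }
};
```

With, say, a 100 ms interval, syncing 10,000 messages triggers a bounded number of repaints instead of 10,000, which is exactly the kind of X traffic reduction this comment is pointing at.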
As it's marked experimental now, I doubt it's release-critical anymore.
Subject: Re: dimap uses 100% ram and cpu, corruption

On Wednesday 07 January 2004 09:48, Stephan Kulow wrote:
> As it's marked experimental now, I doubt it's release critical anymore

Agreed. :-) (though it's still unfortunate...)
I'm setting this to normal now - that something takes a long time certainly doesn't have precedence over the other bugs that are currently in the bugtracker. The current strategy for fixing it is to make the online IMAP use maildir also, and introduce a common base class that moves the X-UID map to the KMail index. The CPU and disk usage comes from the fact that the IMAP UID is currently only available by opening the full mail and reading it. Once we can work on headers only, things will be *much* faster. And there's NO way you can convince me to touch KMail's index format before 3.2 :-)
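The proposed fix above - keeping the UID in the index instead of inside each message file - can be sketched roughly as follows. This is an assumed design for illustration, not KMail's real index code; the class and field names are hypothetical:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// A folder index that records the server-assigned IMAP UID per message,
// so a sync can resolve UIDs without opening and parsing each mail body.
struct IndexEntry {
    unsigned long uid;
    std::string subject;   // whatever header data the index already caches
};

class FolderIndex {
    std::map<unsigned long, IndexEntry> byUid_;  // UID -> index entry
public:
    void add(unsigned long uid, std::string subject) {
        byUid_[uid] = IndexEntry{uid, std::move(subject)};
    }

    // O(log n) in-memory lookup, versus opening the full message from
    // disk just to read its X-UID header.
    std::optional<IndexEntry> findByUid(unsigned long uid) const {
        auto it = byUid_.find(uid);
        if (it == byUid_.end())
            return std::nullopt;
        return it->second;
    }
};
```

During a sync, the server's UID list can then be diffed against the index entirely in memory, which is why working "on headers only" makes the whole check so much faster.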
*** Bug has been marked as fixed ***.