Summary: | Baloo crashes when finding a new file | |
---|---|---|---
Product: | [Frameworks and Libraries] frameworks-baloo | Reporter: | don bowman <don.waterloo+kde>
Component: | Baloo File Daemon | Assignee: | Pinak Ahuja <pinak.ahuja>
Status: | RESOLVED DUPLICATE | |
Severity: | crash | CC: | arthur.marsh, aspotashev, cullmann, matthew, pinak.ahuja, rverschelde, theivorytower, zawertun
Priority: | NOR | Keywords: | drkonqi
Version: | 5.13.0 | |
Target Milestone: | --- | |
Platform: | Ubuntu | |
OS: | Linux | |
Latest Commit: | | Version Fixed In: |
Attachments: | Valgrind output | |
Description
don bowman
2015-09-04 11:40:54 UTC
Hi. Since this is reproducible, could you please see if it occurs when you run the `baloo_file` executable? If it does, could you please run it under valgrind and paste the output?

```
$ valgrind baloo_file
```

Hi Vishesh, I've got this happening reliably for me. The relevant Valgrind output is below (too large for a comment, I'll attach the full thing):

```
==30783==
==30783== Conditional jump or move depends on uninitialised value(s)
==30783==    at 0x4C2E945: _intel_fast_memcpy (vg_replace_strmem.c:929)
==30783==    by 0x5A28EF1: memcpy (string3.h:53)
==30783==    by 0x5A28EF1: Baloo::PostingCodec::decode(QByteArray const&) (postingcodec.cpp:42)
==30783==    by 0x5A0F35F: Baloo::PostingDB::get(QByteArray const&) (postingdb.cpp:100)
==30783==    by 0x5A24FC2: Baloo::WriteTransaction::commit() (writetransaction.cpp:286)
==30783==    by 0x5A1AC72: Baloo::Transaction::commit() (transaction.cpp:262)
==30783==    by 0x422869: Baloo::MetadataMover::moveFileMetadata(QString const&, QString const&) (metadatamover.cpp:58)
==30783==    by 0x619C36A: call (qobject_impl.h:124)
==30783==    by 0x619C36A: QMetaObject::activate(QObject*, int, int, void**) (qobject.cpp:3698)
==30783==    by 0x426DB5: moved (moc_kinotify.cpp:330)
==30783==    by 0x426DB5: KInotify::slotEvent(int) (kinotify.cpp:421)
==30783==    by 0x619C36A: call (qobject_impl.h:124)
==30783==    by 0x619C36A: QMetaObject::activate(QObject*, int, int, void**) (qobject.cpp:3698)
==30783==    by 0x6221A1B: QSocketNotifier::activated(int, QSocketNotifier::QPrivateSignal) (moc_qsocketnotifier.cpp:134)
==30783==    by 0x61A9192: QSocketNotifier::event(QEvent*) (qsocketnotifier.cpp:260)
==30783==    by 0x616B3DB: notify (qcoreapplication.cpp:1038)
==30783==    by 0x616B3DB: QCoreApplication::notifyInternal(QObject*, QEvent*) (qcoreapplication.cpp:965)
==30783==
==30783== Invalid read of size 16
==30783==    at 0x4C2E900: _intel_fast_memcpy (vg_replace_strmem.c:929)
==30783==    by 0x5A28EF1: memcpy (string3.h:53)
==30783==    by 0x5A28EF1: Baloo::PostingCodec::decode(QByteArray const&) (postingcodec.cpp:42)
==30783==    by 0x5A0F35F: Baloo::PostingDB::get(QByteArray const&) (postingdb.cpp:100)
==30783==    by 0x5A24FC2: Baloo::WriteTransaction::commit() (writetransaction.cpp:286)
==30783==    by 0x5A1AC72: Baloo::Transaction::commit() (transaction.cpp:262)
==30783==    by 0x422869: Baloo::MetadataMover::moveFileMetadata(QString const&, QString const&) (metadatamover.cpp:58)
==30783==    by 0x619C36A: call (qobject_impl.h:124)
==30783==    by 0x619C36A: QMetaObject::activate(QObject*, int, int, void**) (qobject.cpp:3698)
==30783==    by 0x426DB5: moved (moc_kinotify.cpp:330)
==30783==    by 0x426DB5: KInotify::slotEvent(int) (kinotify.cpp:421)
==30783==    by 0x619C36A: call (qobject_impl.h:124)
==30783==    by 0x619C36A: QMetaObject::activate(QObject*, int, int, void**) (qobject.cpp:3698)
==30783==    by 0x6221A1B: QSocketNotifier::activated(int, QSocketNotifier::QPrivateSignal) (moc_qsocketnotifier.cpp:134)
==30783==    by 0x61A9192: QSocketNotifier::event(QEvent*) (qsocketnotifier.cpp:260)
==30783==    by 0x616B3DB: notify (qcoreapplication.cpp:1038)
==30783==    by 0x616B3DB: QCoreApplication::notifyInternal(QObject*, QEvent*) (qcoreapplication.cpp:965)
==30783==  Address 0x4065000 is not stack'd, malloc'd or (recently) free'd
==30783==
KCrash: Attempting to start /usr/bin/baloo_file from kdeinit sock_file=/run/user/1000/kdeinit5__0
KCrash: Application 'baloo_file' crashing...
==30783==
==30783== HEAP SUMMARY:
==30783==     in use at exit: 145,962,439 bytes in 691,847 blocks
==30783==   total heap usage: 61,909,015 allocs, 61,217,168 frees, 12,624,450,230 bytes allocated
==30783==
==30783== LEAK SUMMARY:
==30783==    definitely lost: 0 bytes in 0 blocks
==30783==    indirectly lost: 0 bytes in 0 blocks
==30783==      possibly lost: 3,337,148 bytes in 86 blocks
==30783==    still reachable: 142,625,291 bytes in 691,761 blocks
==30783==         suppressed: 0 bytes in 0 blocks
==30783== Rerun with --leak-check=full to see details of leaked memory
==30783==
==30783== For counts of detected and suppressed errors, rerun with: -v
==30783== Use --track-origins=yes to see where uninitialised values come from
==30783== ERROR SUMMARY: 681434 errors from 42 contexts (suppressed: 0 from 0)
Killed
```

But I think the root cause is something around LMDB. I got it to happen in gdb and poked around some. The address given back by mdb_get is invalid to start with. I tried to get GDB to break if rc != 0 (my desktop has Qt compiled without debugging, so the asserts disappear), but it didn't trigger before it crashed again. Could mdb be returning invalid pointers? The pointers aren't anywhere close to an mmapped file. The sizes look really large too (35768630 and 42781780). I'll see about getting the rc value out on crash. Is there anything else that can help?

Also, I see what appear to be similar bugs about this popping up. If you'd like, I'll mark them as duplicates of this bug.

Created attachment 97549 [details]
Valgrind output
So, LMDB is failing with error MDB_BAD_TXN, with the message "Transaction must abort, has a child, or is invalid".

Having got a debug version of Baloo installed, it turns out this gets printed earlier:

```
ASSERT failure in PositionDB::put: "MDB_MAP_FULL: Environment mapsize limit reached", file /var/tmp/portage/kde-frameworks/baloo-5.19.0/work/baloo-5.19.0/src/engine/positiondb.cpp, line 80
KCrash: Attempting to start /usr/bin/baloo_file from kdeinit sock_file=/run/user/1000/kdeinit5__0
KCrash: Application 'baloo_file' crashing...
Aborted
```

For me, I have 6.5T worth of data in over 2 million files, so I'm not surprised I broke a limit. Would a system to recover from this error and resize the database be OK?

*** Bug 361183 has been marked as a duplicate of this bug. ***

*** Bug 361880 has been marked as a duplicate of this bug. ***

*** Bug 361975 has been marked as a duplicate of this bug. ***

*** Bug 362792 has been marked as a duplicate of this bug. ***

Here we run into the db-too-large issue :/ Bug 364475 https://git.reviewboard.kde.org/r/128885/

*** This bug has been marked as a duplicate of bug 364475 ***