Bug 406584 - Digikam crashes after scanning for new items
Summary: Digikam crashes after scanning for new items
Status: RESOLVED FIXED
Alias: None
Product: digikam
Classification: Applications
Component: Database-Scan
Version: 6.1.0
Platform: Appimage Linux
Importance: NOR crash
Target Milestone: ---
Assignee: Digikam Developers
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-04-16 00:04 UTC by MarcP
Modified: 2020-08-01 16:31 UTC
CC List: 3 users

See Also:
Latest Commit:
Version Fixed In: 7.1.0


Description MarcP 2019-04-16 00:04:40 UTC
SUMMARY

I have been experiencing this issue for the last few weeks. After digikam scans for new items at startup, usually when there have been substantial changes in the collection, it immediately closes itself without any warning or error message.

It only happens from time to time (maybe 25% of the time), and I have not been able to capture the precise error from a console (the few times I launched it from a console, it didn't crash).

I'll try to stay alert and see if I can catch the error in a console the next time it happens.


SOFTWARE/OS VERSIONS
Ubuntu 18.04 with Gnome
digikam-6.1.0-git-20190404T203330-qtwebkit-x86-64.appimage
Comment 1 caulier.gilles 2019-04-16 03:45:40 UTC
Run the AppImage from a console with the "debug" argument:

"./digikam-6.1.0-git-20190404T203330-qtwebkit-x86-64.appimage debug"

A gdb backtrace will be generated when a crash appears. Copy and paste the contents here...
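
For reference, once gdb catches the crash and shows the (gdb) prompt, the backtrace can be copied with standard gdb commands (these are generic gdb commands, not digiKam-specific options):

(gdb) bt
(gdb) thread apply all bt

The first command prints the backtrace of the crashing thread; the second prints the backtraces of all threads, which helps when the crash occurs in a helper thread such as QDBusConnection.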

Gilles Caulier
Comment 2 MarcP 2019-04-16 10:29:30 UTC
Ok, I will do that for the next few days. I just tried with the latest updated version (digikam-6.2.0-git-20190416T091455-qtwebkit-x86-64.appimage), syncing a lot of changes, and it didn't crash. I will report back as soon as it happens again.
Comment 3 MarcP 2019-05-11 18:19:28 UTC
I think I finally captured the error in a debug console.

Here are the last few lines of the output:



Digikam::ActionThreadBase::run: Action Thread run  1  new jobs
[New Thread 0x7fff94ff9700 (LWP 6204)]
[New Thread 0x7fff957fa700 (LWP 6205)]
Digikam::ActionThreadBase::run: Action Thread run  1  new jobs
[New Thread 0x7fff7bfff700 (LWP 6206)]
Digikam::ActionThreadBase::cancel: Cancel Main Thread
[New Thread 0x7fff7b7fe700 (LWP 6207)]
[Thread 0x7fff95ffb700 (LWP 6199) exited]
[New Thread 0x7fff95ffb700 (LWP 6208)]
Digikam::ActionThreadBase::slotJobFinished: One job is done
Digikam::ActionThreadBase::slotJobFinished: One job is done
[Thread 0x7fffacdea700 (LWP 6201) exited]
Digikam::ActionThreadBase::cancel: Cancel Main Thread
Digikam::ActionThreadBase::cancel: Cancel Main Thread
[Thread 0x7fffaedee700 (LWP 6202) exited]
[Thread 0x7fff977fe700 (LWP 6200) exited]
Digikam::ActionThreadBase::slotJobFinished: One job is done
[Thread 0x7fff96877700 (LWP 6203) exited]
Digikam::DImg::load: "/home/user2/Documents/FOTOS/whatsapp/IMG-20190418-WA0013.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/whatsapp/IMG-20190418-WA0011.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/whatsapp/IMG-20190418-WA0010.jpg"  : JPEG file identified
Digikam::ActionThreadBase::cancel: Cancel Main Thread
[Thread 0x7fff94ff9700 (LWP 6204) exited]
Digikam::ActionThreadBase::slotJobFinished: One job is done
[Thread 0x7fff957fa700 (LWP 6205) exited]
[New Thread 0x7fff957fa700 (LWP 6209)]

Thread 3 "QDBusConnection" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd7fff700 (LWP 5538)]
0x00007fffe8d6e3c1 in _int_malloc (av=av@entry=0x7fffd0000020, 
    bytes=bytes@entry=28) at malloc.c:3612
3612	malloc.c: No such file or directory.
(gdb) 




In this case, digikam froze instead of just closing, I guess due to the "debug" mode.

The output is much, much longer; I was only able to copy the last ~5000 lines. I didn't paste the whole thing here because of its length and for privacy reasons, but I didn't notice anything special in it. However, I can provide it if you deem it necessary.
Comment 4 Maik Qualmann 2019-05-12 20:27:20 UTC
If it really crashes reproducibly in QDBusConnection, it also has something to do with the desktop system in use. We should then compile the AppImage without QDBus.

Maik
Comment 5 MarcP 2019-05-19 16:02:07 UTC
I experienced this bug again while in debug mode. Like before, digikam crashed right after reaching 100% while scanning for new items. It always happens after many (hundreds of) pictures have received new metadata since the last session.

I also noticed that similar crashes happened occasionally when tagging a large number of pictures, but I don't know if it's related to this bug.



These are the last lines of the debug console before the crash, in case it helps:



[New Thread 0x7fffad5eb700 (LWP 16508)]
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20190116-WA0002.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20190116-WA0001.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20190116-WA0000.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181221-WA0001.jpg"  : JPEG file identified
Digikam::ActionThreadBase::setMaximumNumberOfThreads: Using  4  CPU core to run threads
[New Thread 0x7fffacdea700 (LWP 16509)]
Digikam::ActionThreadBase::setMaximumNumberOfThreads: Using  4  CPU core to run threads
Digikam::ActionThreadBase::run: Action Thread run  1  new jobs
[New Thread 0x7fff967fc700 (LWP 16510)]
[New Thread 0x7fff7bfff700 (LWP 16511)]
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181221-WA0000.jpg"  : JPEG file identified
Digikam::ActionThreadBase::run: Action Thread run  1  new jobs
[New Thread 0x7fff7b7fe700 (LWP 16512)]
Digikam::ActionThreadBase::cancel: Cancel Main Thread
[New Thread 0x7fff7affd700 (LWP 16513)]
[Thread 0x7fff97fff700 (LWP 16505) exited]
[New Thread 0x7fff97fff700 (LWP 16514)]
Digikam::ActionThreadBase::slotJobFinished: One job is done
Digikam::ActionThreadBase::slotJobFinished: One job is done
[Thread 0x7fff977fe700 (LWP 16507) exited]
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181121-WA0004.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181121-WA0003.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181121-WA0002.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181121-WA0001.jpg"  : JPEG file identified
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181118-WA0006.jpg"  : JPEG file identified
Digikam::ActionThreadBase::slotJobFinished: One job is done
[Thread 0x7fffacdea700 (LWP 16509) exited]
Digikam::DImg::load: "/home/user2/Documents/FOTOS/WhatsApp user3/IMG-20181118-WA0005.jpg"  : JPEG file identified
Digikam::ActionThreadBase::slotJobFinished: One job is done
[Thread 0x7fff967fc700 (LWP 16510) exited]
[New Thread 0x7fff967fc700 (LWP 16515)]

Thread 3 "QDBusConnection" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd7fff700 (LWP 15635)]
0x00007fffe8d6e3c1 in _int_malloc (av=av@entry=0x7fffd0000020, 
    bytes=bytes@entry=28) at malloc.c:3612
3612	malloc.c: No such file or directory.
(gdb)
Comment 6 caulier.gilles 2019-05-19 16:14:37 UTC
It's clear that QDBus is the problem here:

Thread 3 "QDBusConnection" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd7fff700 (LWP 15635)]
Comment 7 caulier.gilles 2019-05-19 16:15:54 UTC
Git commit bd9cb88e691e25fc5f810ea54ed3de01871f8368 by Gilles Caulier.
Committed on 19/05/2019 at 16:13.
Pushed by cgilles into branch 'master'.

disable DBUS support

M  +1    -1    project/bundles/appimage/03-build-digikam.sh

https://invent.kde.org/kde/digikam/commit/bd9cb88e691e25fc5f810ea54ed3de01871f8368
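
For context, the fix above is a one-line edit to the AppImage build script. A minimal sketch of what such a change presumably looks like, assuming the script invokes cmake and that the relevant CMake switch is named ENABLE_DBUS (the option name and the cmake invocation shown here are assumptions; the actual script contents are not included in this report):

# sketch of the kind of change in project/bundles/appimage/03-build-digikam.sh
# (ENABLE_DBUS option name and cmake call are assumed, not taken from the commit)
cmake .. -DENABLE_DBUS=OFF   # build digiKam without QDBus support for the AppImage bundle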
Comment 8 caulier.gilles 2020-08-01 14:35:28 UTC
The digiKam 7.0.0 stable release is now published and available as a FlatPak:

https://www.digikam.org/news/2020-07-19-7.0.0_release_announcement/

We need fresh feedback on this issue using this version.

Thanks in advance

Gilles Caulier
Comment 9 MarcP 2020-08-01 15:39:15 UTC
I have not experienced this bug anymore. I will reopen it if I experience something weird again.
Comment 10 caulier.gilles 2020-08-01 16:31:54 UTC
Thanks for the feedback. I'm closing this report now...