Bug 198063 - SCAN : digikam startup is extremely slow
Status: RESOLVED FIXED
Alias: None
Product: digikam
Classification: Applications
Component: Database-Scan
Version: 0.10.0
Platform: Ubuntu Linux
Importance: NOR normal
Target Milestone: ---
Assignee: Digikam Developers
 
Reported: 2009-06-27 14:13 UTC by Martin Lubich
Modified: 2017-07-25 10:50 UTC
7 users

Version Fixed In: 4.3.0


Description Martin Lubich 2009-06-27 14:13:57 UTC
Version:            (using KDE 4.2.2)
Compiler:          gcc (Ubuntu 4.3.3-5ubuntu4) 4.3.3 
OS:                Linux
Installed from:    Ubuntu Packages

I have a collection of about 24000 images. Each time I start digikam, it takes more than two minutes to get past the initial splash screen.
The info displayed during the long wait is 'reading database'.
I am running ubuntu 9.04 64bit on an AMD Phenom Quadcore @ 2.4Ghz with 8GB of RAM and a Seagate ST3500320AS 500GB SATA Drive with reiserfs 3 as filesystem.

Subsequent starts of digikam are much faster, but only because Linux caches the file information fetched previously. If I flush the caches, the > 2 min wait is back every time.

I have the option 'Scan for new items at startup' deselected, so there should be no reason for any filesystem search on startup.

I used strace to monitor the system calls digikam performs during initialization, and I discovered that digikam does a stat on each and every file in its database, effectively performing a scan.
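The pattern the trace reveals, one stat per catalogued file, is easy to reproduce outside digikam. This Python sketch is purely illustrative (the file names and counts are made up, not digikam code); run under strace, each `os.stat` shows up as one stat64 call:

```python
import os
import tempfile

def stat_all(root):
    """Issue one stat per file under root, as a full collection scan would."""
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            os.stat(os.path.join(dirpath, name))  # one stat64 syscall per file
            count += 1
    return count

# Throwaway demo tree: with 24000 images this loop alone would explain
# 24000 stat64 calls in the trace.
root = tempfile.mkdtemp()
for i in range(100):
    open(os.path.join(root, "img_%04d.jpg" % i), "w").close()
print(stat_all(root))  # -> 100
```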

I double-checked with the option 'Scan for new items at startup' turned on, and indeed, in that case the files get stat'ed twice.

This behaviour never happened with any digikam version < 0.10, and I have been using digikam since its earliest versions.

Is there anything I can do to reduce my startup time to something bearable?
Comment 1 Martin Lubich 2009-06-29 21:26:42 UTC
I just changed the filesystem where my collection resides from reiserfs 3 to ext3 and had an enormous increase in startup speed. It now takes just ~3 sec to pass the 'Reading database' stage.

So this seems to be an issue related to reiserfs, although the stat system calls are still there; ext3 just seems to handle them better. It might be interesting to see how other filesystems behave...
Comment 2 caulier.gilles 2009-06-29 22:20:44 UTC
Martin,

Where is your database file stored? On ext3 or reiserfs?

Gilles Caulier
Comment 3 Martin Lubich 2009-06-30 08:47:53 UTC
The database is stored in the same volume as my main photo collection, so it was on reiser and is now on ext3.
Comment 4 caulier.gilles 2009-06-30 08:58:38 UTC
Very interesting.

What happens if the database file goes back on reiserfs while the image collection is hosted on ext3? Is it still faster?

Gilles Caulier
Comment 5 Martin Lubich 2009-06-30 09:11:07 UTC
I'll check it out this evening.
Comment 6 Andi Clemens 2009-06-30 09:22:45 UTC
For me, reiserfs was always the slowest FS I ever tested, especially as it gets older. But that doesn't help you now :D

If your reiserfs partition isn't that big, maybe you can back up its data to an external drive, remove and re-create the reiserfs partition, and copy the data back?
This would defragment it. I do this sometimes for my /var partition, the only one on my system with reiserfs on it. After that, I usually see a speedup of 2-4x.
Comment 7 Martin Lubich 2009-06-30 09:33:20 UTC
The reiserfs partition I was using was indeed very old, about 3-4 years. As I was planning to move to ext3/4 anyway, I have now moved my whole collection to the new filesystem, just as you suggested. That is where I noticed this massive speedup.

I just think it is maybe not a good idea to rely on the type or state of a filesystem to get decent digikam startup times for huge collections.
Collections like these typically sit on the same filesystem for a very long time and keep growing.
Comment 8 Andi Clemens 2009-06-30 09:52:04 UTC
Sure, but even with 24000 images and more I do not have speed issues like yours, not even with reiserfs.
But note: I only tested different filesystems for a short time, not for over 4 years. My main FS was ext3 and is now ext4.
If your reiserfs partition got that slow, it is due to fragmentation, which happens a lot on reiser3.

What I want to say is that digiKam doesn't rely on the type or state of a filesystem, but it does rely ON the filesystem.

If your FS is slow and fragmented, any program will become slower, not only digiKam. What should we do about this?

We simply use the Qt and KDE architecture provided to us, and that's it. We don't do FS-specific things that slow down reiser but not the others :)

Reiser was never optimal for something like an image or music collection, at least for me. It was always the slowest choice. If reiser gets fragmented, it's even worse.

There is a persistent rumor that Linux filesystems don't fragment. It is just not true, but everybody keeps repeating it.
Ext4 is the first FS that admits it can fragment and ships an online defragmenter to counter it. Now everybody goes: "Uuuh, ext4 sucks, because it is fragmenting!"

There is no FS in the world that does NOT fragment; some just do it more, some less. And Windows fragments a lot, and quickly ;-)

This is why ext3, reiser, and all the filesystems that don't provide defragmentation become slower over time. I've seen this for many years; it just takes longer than with FAT32 or NTFS.

What I wanted to say here is that digiKam can't do anything about this. The new version does start up slower, that is true, but for me all KDE4 apps do.

As you mentioned above, there seems to be a whole collection scan going on. I can confirm this, and I guess it is KDirWatch that is doing it. For me (50,000 images) it is not noticeable, but maybe on a heavily fragmented FS it is.

Marcel,
is it possible that we still scan the collection, even though it is disabled?

Andi
Comment 9 Martin Lubich 2009-06-30 10:12:29 UTC
Being a long-time Linux user and developer, I am well aware of the misconception regarding fragmentation in Linux filesystems, but thanks anyway for the summary :)

I suppose, then, that the changed infrastructure KDE4 provides has a lot to do with this behaviour. Note that digikam 0.9.x did not show this problem, and I used it with the same collection that gives me the slow startup now.

Regarding the actual scan, digikam does respect the 'scan for new items at startup' option; with the option enabled it simply stats everything twice. It is quite easy to see in a strace'd run.

So this may really be a problem with KDE4's KDirWatch doing a stat where the KDE3 one did not (very oversimplified, of course :))
Comment 10 Martin Lubich 2009-06-30 22:51:20 UTC
I have now tested with the collection residing on ext3 and the database on reiserfs. The fast startup time is still there.
Comment 11 Andi Clemens 2009-07-01 14:19:35 UTC
When using strace -c, you get a summary of syscall counts and times. Sure, it is not as accurate as callgrind or gprof, but at least it gives some useful information:

% time     seconds  usecs/call     calls    errors syscall                                                                                                                                                                                  
------ ----------- ----------- --------- --------- ----------------                                                                                                                                                                         
 38.71  452.126188       10665     42393       508 stat64                                                                                                                                                                                   
 15.53  181.369525       10659     17016      9687 read                                                                                                                                                                                     
 14.35  167.568354        7106     23580     17458 access                                                                                                                                                                                   
  5.01   58.565473       10666      5491           clock_gettime                                                                                                                                                                            
  4.08   47.689382       10647      4479           select                                                                                                                                                                                   
  2.51   29.307504       10665      2748       241 lstat64                                                                                                                                                                                  
  2.41   28.108221       10603      2651           writev                                                                                                                                                                                   
  2.24   26.171927       10665      2454           gettimeofday                                                                                                                                                                             
  1.85   21.575460       10665      2023       399 open                                                                                                                                                                                     
  1.61   18.791730       10665      1762           _llseek                                                                                                                                                                                  
  1.57   18.293895       10667      1715           time                                                                                                                                                                                     
  1.50   17.525235       10545      1662           close                                                                                                                                                                                    
  1.14   13.309920       10665      1248           fstat64                                                                                                                                                                                  
  1.06   12.403395       10665      1163           mmap2                                                                                                                                                                                    
  0.99   11.562332       10666      1084           getdents                                                                                                                                                                                 
  0.81    9.481194       10665       889           fcntl64                                                                                                                                                                                  
  0.56    6.531191       10483       623           write                                                                                                                                                                                    
  0.49    5.769765       10665       541           statfs                                                                                                                                                                                   
  0.48    5.641834       10665       529           inotify_add_watch                                                                                                                                                                        
  0.48    5.641785       10665       529           inotify_rm_watch                                                                                                                                                                         
  0.45    5.303356       10481       506           munmap                                                                                                                                                                                   
  0.38    4.490636       10769       417           poll                                                                                                                                                                                     
  0.35    4.126428       10089       409        54 futex                                                                                                                                                                                    
  0.30    3.512119       10675       329           ioctl                                                                                                                                                                                    
  0.17    2.026389       10665       190        15 unlink                                                                                                                                                                                   
  0.17    1.979691       10701       185           brk                                                                                                                                                                                      
  0.10    1.141155       10665       107           mprotect                                                                                                                                                                                 
  0.09    1.098495       10665       103           getcwd                                                                                                                                                                                   
  0.08    0.938520       10665        88           uname                                                                                                                                                                                    
  0.07    0.778545       10665        73           fchmod                                                                                                                                                                                   
  0.07    0.767920       10666        72           link                                                                                                                                                                                     
  0.07    0.767880       10665        72           fstatfs
  0.03    0.383940       10665        36           clone
  0.03    0.351945       10665        33           getuid32
  0.02    0.255960       10665        24           socket
  0.02    0.255960       10665        24           shmctl
  0.01    0.170640       10665        16           getpeername
  0.01    0.159975       10665        15           bind
  0.01    0.159975       10665        15           listen
  0.01    0.149310       10665        14           accept
  0.01    0.149310       10665        14           getsockname
  0.01    0.149310       10665        14           getsockopt
  0.01    0.095985       10665         9         4 connect
  0.01    0.095985       10665         9           semop
  0.01    0.095985       10665         9           semctl
  0.01    0.085320       10665         8           readlink
  0.01    0.063990       10665         6           mlock
  0.01    0.063990       10665         6           geteuid32
  0.01    0.063990       10665         6           madvise
  0.01    0.063990       10665         6           getdents64
  0.01    0.063990       10665         6           shmat
  0.01    0.063990       10665         6           shmdt
  0.01    0.063990       10665         6           shmget
  0.00    0.053325       10665         5           sched_get_priority_min
  0.00    0.053325       10665         5           getgid32
  0.00    0.042660       10665         4           kill
  0.00    0.042660       10665         4           fdatasync
  0.00    0.042660       10665         4           sched_get_priority_max
  0.00    0.031995       10665         3           getegid32
  0.00    0.031995       10665         3           clock_getres
  0.00    0.031995       10665         3           semget
  0.00    0.021330       10665         2           umask
  0.00    0.021330       10665         2           rt_sigprocmask
  0.00    0.010665       10665         1           execve
  0.00    0.010665       10665         1           chmod
  0.00    0.010665       10665         1           lseek
  0.00    0.010665       10665         1           rename
  0.00    0.010665       10665         1         1 mkdir
  0.00    0.010665       10665         1           pipe
  0.00    0.010665       10665         1           sched_getparam
  0.00    0.010665       10665         1           sched_getscheduler
  0.00    0.010665       10665         1           getrlimit
  0.00    0.010665       10665         1           fchown32
  0.00    0.010665       10665         1           set_thread_area
  0.00    0.010665       10665         1           set_tid_address
  0.00    0.010665       10665         1           inotify_init
  0.00    0.010665       10665         1           set_robust_list
  0.00    0.010665       10665         1           SYS_331
  0.01    0.135977        9713        14           rt_sigaction
  0.00    0.029327        7332         4           getpid
------ ----------- ----------- --------- --------- ----------------
100.00 1168.071528                117481     28367 total


I started digiKam and immediately closed it again. As you can see, stat64 accounted for nearly 40% of the time spent in system calls.
This would explain why the KDE4 version, although scanning is disabled, is much slower than the KDE3 version (as Martin already mentioned).
KDE4 seems to do a lot more here than KDE3 did. I never really worked with the KDE3 code of digiKam, so I don't know which techniques we used there.
I'd also like to mention that loading the plugins is slower than in KDE3, too.
Maybe we could load them in parallel? We only have about 20 plugins; wouldn't it be possible to load them all at once using threads? They don't depend on each other, so we shouldn't have to wait for one plugin before loading the next.
Or wouldn't this be possible?

Andi
Comment 12 caulier.gilles 2009-07-01 14:28:21 UTC
Here, KDE4 plugin loading time is the same as KDE3's. You only see a difference if you compile with all debug symbols; in that case it is slower.

The plugin interface is delegated to the KDE API. There is no way to use threads here.

Scanning the collection is really the problem. What is "stat64" exactly?

Gilles Caulier
Comment 13 Andi Clemens 2009-07-01 16:13:52 UTC
Hmm, I just fired up my virtual machine with openSUSE 11.1 (which still has digiKam 0.9.4 in the repos) and ran strace -c on it.
I get (with the same collection) nearly the same result, yet it starts a lot faster.
Maybe it is the database that takes so long now?
It has become bigger in KDE4 (even without the thumbnails DB; the fingerprints table is really huge).
I will back up my DB now and reset the fingerprints table.
It should then be nearly the same size as the KDE3 version. Maybe it will start faster.
Comment 14 Andi Clemens 2009-07-01 16:31:23 UTC
Without fingerprint data and without the thumbnails DB it is still slow, so it might not be a database loading issue.
I guess we can't do anything about it if it is KDE-related?
Comment 15 caulier.gilles 2009-07-01 16:39:36 UTC
And in this case, what are the strace results? Is stat64 still on top?

What is "stat64" exactly?

Gilles
Comment 16 caulier.gilles 2009-07-01 16:46:21 UTC
OK, look here:

http://linux.die.net/man/2/stat64

If this function is called from QFileInfo or the KDE API, we can't do anything.

If it is called from Exiv2, it can be optimized...

I remember some fopen calls used in DImgLoader to check whether a file exists. I also remember that calls are redundant between DImg::load() and each DImgLoader::load(). The code should be refactored to optimize this.
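The refactoring suggested here boils down to checking the file once in the caller and handing the result down, instead of every loader repeating an fopen or stat of its own. A rough Python sketch of the idea; the class and function names are hypothetical, not digiKam's actual API:

```python
import os
import tempfile

class ImageLoader:
    """Hypothetical loader that trusts the caller's existence check."""
    def load(self, path, size):
        # No fopen()/stat() here: the caller already verified the file.
        return "decoded %s (%d bytes)" % (path, size)

def load_image(path, loader):
    """One stat up front; the result is passed down to the loader."""
    try:
        size = os.stat(path).st_size  # the only filesystem check
    except FileNotFoundError:
        return None
    return loader.load(path, size)

fd, path = tempfile.mkstemp(suffix=".jpg")
os.write(fd, b"data")
os.close(fd)
print(load_image(path, ImageLoader()) is not None)   # -> True
print(load_image(path + ".missing", ImageLoader()))  # -> None
```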

But why does this function take so long? That is certainly dependent on the filesystem used. Right?

Gilles
Comment 17 Andi Clemens 2009-07-01 16:55:17 UTC
Yes, this might be filesystem-related. For me, digiKam startup on reiserfs is much slower than on ext3/4. The collection I am checking at the moment has 35,687 items in it.

I just looked at the strace output without fingerprints and the thumbnails DB; it is the same.
We stat every single item in the database, although "scan on startup" is turned off.
When digiKam is actually shown, we get another load of stat64 calls, because we fill in the album thumbnails and the icon view.
Comment 18 Marcel Wiesweg 2009-07-01 18:49:43 UTC
callgrind won't give us information about time spent in filesystem access, but it tells us where stat() is called from. Results of a digikam startup (without collection scanning) and shutdown:

44754 calls of __xstat in libc.so

of which
6547 from KStandardDirs
33288 from QFSFileEnginePrivate, called from QFileInfoPrivate::getFileFlags

Now switching to the "All callers" view for getFileFlags:
94% of the cost in getFileFlags is incurred when it is called from KDirWatch::addDir; indeed, KDirWatchPrivate::addEntry calls QDir::entryInfoList.

It seems adding a directory to KDirWatch with the WatchSubDirs flag causes a stat for all contained files.
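A schematic Python rendering of that behaviour may make it concrete. This is an analogy of the pattern Marcel describes, not KDirWatch's actual code: adding a watch recursively forces a directory listing with file info, which costs one stat per entry.

```python
import os
import stat
import tempfile

stat_calls = 0

def entry_info_list(directory):
    """Rough analogue of QDir::entryInfoList: one stat per entry."""
    global stat_calls
    infos = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        info = os.stat(path)  # these are the stat64 calls seen in the traces
        stat_calls += 1
        infos.append((path, info))
    return infos

def add_dir_watch(directory):
    """Rough analogue of KDirWatch::addDir with the WatchSubDirs flag."""
    for path, info in entry_info_list(directory):
        if stat.S_ISDIR(info.st_mode):  # recurse, statting as we go
            add_dir_watch(path)

root = tempfile.mkdtemp()
album = os.path.join(root, "album")
os.mkdir(album)
for i in range(5):
    open(os.path.join(album, "%d.jpg" % i), "w").close()
add_dir_watch(root)
print(stat_calls)  # -> 6: the album dir itself plus its 5 files
```

Scaled to a real collection, every file under every watched album gets stat'ed once just to set up the watch.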
Comment 19 caulier.gilles 2009-07-01 19:06:39 UTC
I hope that the KDirWatch instance is initialized after the startup scanning stage...

Gilles
Comment 20 caulier.gilles 2009-07-01 20:44:36 UTC
But your investigation was done without scanning enabled, right?

If yes, KDirWatch is not the problem here; startup is really faster in that case.

Gilles
Comment 21 Johannes Tögel 2009-10-16 23:39:43 UTC
I also have a pretty big collection of photos (30,000+). After running the fingerprint scan on all of them, digikam startup is horribly slow, it takes a ridiculous amount of RAM (nearly 2 gigabytes), and, worst of all, it no longer shows the image previews but says the image cannot be previewed.

I'm running sidux with KDE 4.3.1 and digikam 1.0.0-beta4, on a PC with an AMD Athlon 64 X2 at 2.5 GHz, 4 GB of RAM, and an ext3 filesystem.


An immediate workaround worth implementing, IMO, would be an option to delete fingerprints from the database.
Comment 22 caulier.gilles 2009-10-17 00:21:06 UTC
I suspect that startup is slow because the fingerprint data is merged into the same database as the rest of the image-related information.

Proposal: separate the fingerprint data into a new DB file.
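SQLite supports this kind of split directly via ATTACH DATABASE. A minimal sketch of the proposal; the file and table names are illustrative, not digiKam's actual schema:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
main_db = os.path.join(tmp, "digikam4.db")
fp_db = os.path.join(tmp, "fingerprints.db")

con = sqlite3.connect(main_db)
# Keep bulky fingerprint blobs in a separate file, attached on demand.
con.execute("ATTACH DATABASE '%s' AS fp" % fp_db)
con.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE fp.fingerprints (imageid INTEGER, matrix BLOB)")
con.execute("INSERT INTO images VALUES (1, 'photo.jpg')")
con.execute("INSERT INTO fp.fingerprints VALUES (1, x'00ff')")
# Cross-file joins still work while the second file is attached:
row = con.execute("SELECT name, length(matrix) FROM images "
                  "JOIN fp.fingerprints ON id = imageid").fetchone()
print(row)  # -> ('photo.jpg', 2)
```

A startup that never touches fingerprints would then never pay the I/O cost of the fingerprint file at all.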

Gilles Caulier
Comment 23 Andi Clemens 2009-10-17 00:35:12 UTC
For me, digiKam got slower because of the thumbnails DB, but I already wrote that here :D

Johannes,
have you run a "find duplicates" search?
That would explain the slow startup; if the search is removed (run it on a small folder with a high threshold), it should be fast again.
Fingerprints are not loaded into RAM, so there is no reason for such huge memory consumption.

So the fingerprint data itself shouldn't be responsible for this, but the search might be.

Try removing it as described above and see if things get better.

Andi
Comment 24 Andi Clemens 2009-10-17 00:36:58 UTC
So again, to sum up: I also have a huge test collection here (50,000 images) and no slow startup, except when I have a (fairly big) saved duplicates search.
Comment 25 Edmon 2010-09-18 22:51:25 UTC
(In reply to comment #24)
> So again to sum it up: I also have a huge test collection here (50.000 images),
> no slow startup, only when I have a (fairly big) saved duplicates search.

I have 35,000 images in a collection mounted over the network (Samba), and digikam takes a few minutes to start.
Comment 26 Johannes Tögel 2010-09-19 18:55:28 UTC
@Andi:
Thank you very much, the duplicates search was the problem. Digikam still takes long to start up, but ~1 minute is acceptable.

@Edmon: I don't know your specific setup, but I think the higher latencies and lower transfer rates one usually gets over the network could be the problem.
Comment 27 Edmon 2010-09-19 20:30:43 UTC
Thanks for your answer, Johannes. I still have a question:
since I disabled the search for new items at startup, I would not expect digikam to do any scan at startup. But strace -c gives something similar to comment #11:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 40.61 1085.971160       16871     64369      2861 stat64
 18.87  504.701133        8906     56672     41793 access
  8.78  234.882550       15863     14807      8507 read
  4.59  122.687562       16710      7342       767 lstat64
  4.55  121.577073       17274      7038           clock_gettime
  4.35  116.254236       15736      7388           poll
  2.96   79.104965       17974      4401      1960 open
  2.54   67.821400       15736      4310           writev
  2.07   55.338870       16176      3421           fstat64
  1.63   43.579692       17404      2504           close
  1.58   42.294013       15572      2716           gettimeofday
  1.02   27.192223       19354      1405           getdents
  0.95   25.522614       16267      1569           fcntl64
  0.89   23.930248       18710      1279           time
  0.75   19.972555       18391      1086           mmap2
  0.55   14.635050       18502       791           inotify_add_watch
  0.46   12.408250       19208       646           statfs
  0.41   10.844816       15274       710        42 inotify_rm_watch
  0.34    9.191334       17778       517           write
  0.29    7.739680       18605       416           mprotect
  0.26    6.918042       18063       383           _llseek
  0.20    5.441880       17113       318        37 futex
  0.21    5.505908       17876       308           brk
  0.20    5.225893       17836       293           ioctl
  0.16    4.249862       17857       238        16 unlink
  0.15    3.996653       18167       220           munmap
  0.09    2.312125       18497       125           uname
  0.07    1.996904       19201       104           getcwd
  0.07    1.785698       18221        98           fchmod
  0.06    1.724894       18350        94           link
  0.05    1.281683       15442        83           select
  0.01    0.384020       19201        20         4 connect
  0.01    0.307216       19201        16           shmctl
  0.01    0.211211       19201        11           readlink
  0.01    0.153608       19201         8           pipe
  0.01    0.153608       19201         8           geteuid32
  0.01    0.153608       19201         8           setsockopt
  0.01    0.134407       19201         7           getgid32
  0.00    0.115206       19201         6           semop
  0.00    0.115206       19201         6           semctl
  0.00    0.096005       19201         5           sched_get_priority_min
  0.00    0.076804       19201         4           fdatasync
  0.00    0.076804       19201         4         2 mlock
  0.00    0.076804       19201         4           sched_get_priority_max
  0.00    0.076804       19201         4           ftruncate64
  0.00    0.076804       19201         4           getegid32
  0.00    0.076804       19201         4           clock_getres
  0.00    0.076804       19201         4           shmat
  0.00    0.076804       19201         4           shmdt
  0.00    0.076804       19201         4           shmget
  0.00    0.057603       19201         3           pipe2
  0.00    0.057603       19201         3         3 send
  0.00    0.038402       19201         2         2 mkdir
  0.00    0.038402       19201         2           umask
  0.00    0.038402       19201         2           rt_sigprocmask
  0.00    0.038402       19201         2           semget
  0.00    0.019201       19201         1           execve
  0.00    0.019201       19201         1           utime
  0.00    0.019201       19201         1           fstatfs
  0.00    0.019201       19201         1           sched_getparam
  0.00    0.019201       19201         1           sched_getscheduler
  0.00    0.019201       19201         1           getrlimit
  0.00    0.019201       19201         1           getresuid32
  0.00    0.019201       19201         1           getresgid32
  0.00    0.019201       19201         1           sched_getaffinity
  0.00    0.019201       19201         1           set_thread_area
  0.00    0.019201       19201         1           set_tid_address
  0.00    0.019201       19201         1           statfs64
  0.00    0.019201       19201         1           inotify_init
  0.00    0.019201       19201         1           set_robust_list
  0.03    0.854447       18180        47           getuid32
  0.02    0.632035       18058        35           socket
  0.02    0.625633       18959        33           clone
  0.01    0.374422       17019        22           getsockname
  0.01    0.337706       18761        18           getdents64
  0.01    0.336020       16801        20           getpeername
  0.01    0.321616       17868        18           rt_sigaction
  0.01    0.276015       18401        15           kill
  0.01    0.248015       16534        15           bind
  0.01    0.248015       16534        15           listen
  0.01    0.220814       15772        14           accept
  0.01    0.220814       15772        14           getsockopt
  0.01    0.136807       15201         9           madvise
  0.00    0.049603       16534         3           rename
  0.00    0.040802       10201         4           getpid
  0.00    0.030402       15201         2           fchown32
  0.00    0.015201       15201         1           lseek
------ ----------- ----------- --------- --------- ----------------
100.00 2674.110292                186095     55994 total

Why would digikam call stat64 64369 times when I have disabled the search for new items? (And I don't think I have a fingerprint database to look for duplicates; how can I make sure of that?)
Comment 28 Johannes Tögel 2010-09-19 22:57:21 UTC
Edmon,
I am not a digikam developer and don't know much about Linux/UNIX on a technical level, but I think digikam just stats all existing images to find out whether they were deleted or changed since the last time digikam was started. That would only explain about 35,000 of the 64,369 calls, but it sounds quite logical to me.

Regarding the fingerprint database: on the left side of the digikam window there is a vertical tab labeled "Search for duplicates" or similar. If you didn't use it in the past, it is highly unlikely to cause problems, because fingerprint generation has to be started manually.

My problem was that I closed digikam with a duplicates search still open, which caused digikam to load the fingerprints into RAM at every startup.
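The deleted-or-changed check Johannes guesses at would look roughly like this. The dictionary standing in for the database is invented for illustration; the point is the one stat per known file, which is where tens of thousands of stat64 calls would come from:

```python
import os
import tempfile

def find_deleted_and_changed(recorded):
    """recorded: {path: mtime stored in the DB}. One stat per entry."""
    deleted, changed = [], []
    for path, recorded_mtime in recorded.items():
        try:
            mtime = os.stat(path).st_mtime  # one stat64 per catalogued file
        except FileNotFoundError:
            deleted.append(path)            # file vanished since last run
            continue
        if mtime != recorded_mtime:
            changed.append(path)            # file modified since last run
    return deleted, changed

tmp = tempfile.mkdtemp()
keep = os.path.join(tmp, "keep.jpg")
open(keep, "w").close()
gone = os.path.join(tmp, "gone.jpg")  # never created: simulates a deletion
recorded = {keep: os.stat(keep).st_mtime, gone: 0.0}
deleted, changed = find_deleted_and_changed(recorded)
print(len(deleted), len(changed))  # -> 1 0
```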
Comment 29 Johan Swanepoel 2010-10-03 02:17:41 UTC
Hi Guys

Firstly let me say I really like Digikam.

I have been experiencing the startup problem for a while now, so hopefully this information will help. 

Digikam takes about 5-10 minutes to start up, with scanning for new images disabled. I had one quick startup after running cleanup_digikamdb, but it was slow again after that.

OS: Kubuntu 10.04
File System: ext4 (local disk, images and DB in the same folder)
Images: 34466 (find . -type f | wc -l)
Average Image Size: 8-15 MP
Album size: 123GB

Startup output:
digikam 
QSqlDatabasePrivate::removeDatabase: connection 'ConnectionTest' is still in use, all queries will cease to work.
Time elapsed: 218 ms
Model: Time elapsed: 440 ms
TextureColorizer: Time elapsed: 125 ms
Time elapsed: 2 ms
Model: Time elapsed: 26 ms
QInotifyFileSystemWatcherEngine::addPaths: inotify_add_watch failed: No such file or directory
QFileSystemWatcher: failed to add paths: /home/johan/.config/ibus/bus
Bus::open: Can not get ibus-daemon's address. 
IBusInputContext::createInputContext: no connection to ibus-daemon
Comment 30 caulier.gilles 2012-06-22 08:44:51 UTC
About the SQLite and ext4 performance issue, I found this instructive page:

http://neuntoeter.wordpress.com/2012/06/21/io-probleme-mit-digikam/

Can someone who uses an ext4 filesystem confirm that performance is improved with this solution?

Gilles Caulier
Comment 31 Andi Clemens 2012-06-22 09:25:57 UTC
Yes, it seems to be a little bit faster... though not as fast as turning barriers off...
Comment 32 Andi Clemens 2012-06-22 09:32:21 UTC
Keep in mind that WAL journaling should not be used with network filesystems!!
See http://www.sqlite.org/draft/wal.html for more details.
Comment 33 Marcel Wiesweg 2012-06-23 17:23:17 UTC
I would expect performance with WAL to be much better in situations where digikam writes to the database in multiple commits: Scanning new images, creating thumbnails, working with the application in parallel.
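For anyone who wants to try this, the WAL mode discussed above can be switched on with a single pragma. A minimal sketch, assuming a local digikam4.db (an example path; as noted in comment 32, do not use WAL on a database stored on a network filesystem):

```python
import sqlite3

def enable_wal(db_path):
    """Switch a SQLite database to write-ahead logging (WAL).

    db_path would typically point at your digikam4.db. WAL mode is
    persistent, so this only needs to be run once per database file.
    Do NOT enable WAL on a database living on a network share.
    """
    con = sqlite3.connect(db_path)
    try:
        # The pragma returns the journal mode actually in effect,
        # which is "wal" on success.
        mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
    finally:
        con.close()
    return mode
```

Running it once while digikam is closed should be enough; subsequent connections to the same file will open it in WAL mode automatically.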
Comment 34 Axel Krebs 2012-06-30 06:13:03 UTC
Hi there,

I updated digiKam yesterday to v. 2.6. Nice impression, congratulations!

The startup procedure is extremely slow, similar to the previous v. 2.5.

Some data from "Help >> Database statistics":

digiKam version 2.6.0
Images: 
BMP: 1
GIF: 100
JP2: 16
JPG: 115990
PGF: 2
PNG: 1444
RAW-CR2: 632
RAW-CRW: 15778
RAW-DNG: 7
RAW-NEF: 70008
TIFF: 1655
XCF: 1
Total: 205634

Videos: 
AVI: 73
MOV: 16
MPEG: 2
Total: 91

Total number of entries: 205725
Albums: 2638
Tags: 92
Database driver: QSQLITE

Starting digiKam:

Time	          Duration	    Memory
19:11:24		   00:00:00
19:18:05	    00:06:41	   2681292
19:28:00	    00:16:36	   3027132
19:33:35	    00:22:11	   3660744

So, after some 23(!!) minutes, digikam finally starts up.

An astonishing observation for me: it looks like only _one_ of the four CPUs was engaged, with its load constantly near 25%. Sometimes the active CPU changed.

File sizes: 
thumbnails-digikam.db: 4.7 GB
digikam4.db: 350.4 MiB

I hope this points you to some potential for improvements.


Axel
Comment 35 caulier.gilles 2014-08-05 13:22:57 UTC
Martin,

Following the discussion here, what's new with this issue? Have you found a solution for your database storage location and environment?

Gilles Caulier
Comment 36 Martin Lubich 2014-08-06 07:00:44 UTC
Gilles

There has been a lot of development and progress, both in digikam and in my photo organization :)

I have no real issues anymore with startup speed. Currently my images reside on a Samba mount (over a 1 Gbit network connection) with my database stored locally on an ext4 filesystem.

The number of images has grown to roughly 80000 now. The scan digikam performs immediately after starting takes about 6 seconds, which is in my opinion quite impressive for the configuration I am using.

So, no problems regarding startup scanning time :)
Comment 37 caulier.gilles 2014-08-06 08:50:17 UTC
Thanks for this feedback

Gilles Caulier
Comment 38 Paul Sommer 2014-10-02 03:01:29 UTC
Martin,
How did you manage to get that improvement in startup speed?
I am now using digikam 4.3 with 40,000 images on an NFS share and a local database on an SSD drive, and it still takes about 25 seconds to collect the images. During that time digikam reads AND WRITES 500 kB/sec to the network (why does it write at all with a local database??)

Even worse: directly after digikam starts, the current album is scanned again. This takes another 3 to 15 seconds depending on the album size and on whether digikam had crashed (which happens every couple of minutes since I switched to 64-bit Kubuntu).

Why has the "scan for new images on startup" switch been removed? 
It's so frustrating to see the "anti usability dogma" spreading all over KDE for a couple of years now :(

So, for me this issue is not fixed at all.

If I can provide any further infos on that matter, please let me know.