I recently wanted to migrate my digiKam database from MySQL to SQLite. However, the database migration is extremely slow. top(1) reported that the digiKam process consumed more than 1700 minutes of CPU time for the migration. At 90,000 pictures, that is more than one second per picture! The resulting database is 150 MB in size, so this works out to a throughput of roughly 1.5 kB/s. This is on a two-core Intel Celeron CPU 847 @ 1.10GHz with 4GB of RAM and a local MySQL database. Clearly this could be optimized a lot: a mysqldump of the original database finishes in less than 8 seconds! Peeking at the process with a debugger, it seems that most of the time is spent in the Qt event loop. Maybe the progress bar is repainted for every row in the database? Reproducible: Always
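To illustrate the progress-bar hypothesis above: if the migration loop updates a modal progress dialog on every single row, throttling the updates to, say, every few hundred rows would keep the event loop out of the hot path. A purely illustrative sketch (the row count and the migrateOneRow() helper are made up, this is not digiKam's actual code):

    #include <QApplication>
    #include <QProgressDialog>

    static void migrateOneRow(int /*row*/)
    {
        // Placeholder for the per-row copy work.
    }

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        const int totalRows = 90000;
        QProgressDialog progress("Migrating database...", "Abort", 0, totalRows);
        progress.setWindowModality(Qt::WindowModal);

        for (int row = 0; row < totalRows; ++row)
        {
            migrateOneRow(row);

            // Touch the GUI (and let the event loop repaint) only every
            // 500 rows instead of on every row.
            if (row % 500 == 0)
            {
                progress.setValue(row);
                if (progress.wasCanceled())
                    break;
            }
        }
        progress.setValue(totalRows);
        return 0;
    }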
Is this file still valid with the latest digiKam 5.0.0? Gilles Caulier
Sorry, at the moment I don't have time to update to digiKam 5.0.0 and set up appropriate tests (still on digiKam 4.9.0).
And with 4.9.0, is the problem reproducible? Gilles Caulier
Can you reproduce the problem using the digiKam Linux AppImage bundle? The latest bundle is available at this URL: https://drive.google.com/drive/folders/0BzeiVr-byqt5Y0tIRWVWelRJenM Gilles Caulier
With the next 5.8.0 release, MySQL support has been much improved and a lot of bugs have been fixed. Please test with the pre-release 5.8.0 bundles that we provide and give us feedback: https://files.kde.org/digikam/ Thanks in advance, Gilles Caulier
I can confirm that in digiKam 5.8.0, migration from MySQL to SQLite is still very slow. To speed it up, I created a RAM disk to hold the SQLite database, but it still takes ages. I noticed that during migration the .journal file is created and deleted over and over again, which is why I think the problem is related to each INSERT being committed as its own transaction. According to http://www.sqlite.org/faq.html#q19:

-----------
Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.

By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.
----------

So it would probably be a good idea to wrap the inserts in a single transaction. Beyond that, we could apply a few more tweaks to improve performance. We can ask SQLite not to delete and recreate the journal file for every insert by using PRAGMA journal_mode; with a value of e.g. 'TRUNCATE', the journal file is kept but merely cleared. If we can accept losing the imported data on e.g. a power failure, further pragmas can speed things up.

See: https://blog.devart.com/increasing-sqlite-performance.html
See: https://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite
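For illustration, here is a minimal sketch of what I mean, using Qt's SQLite driver. The database file name, the table, and the concrete PRAGMA values are just assumptions for the example; this is not digiKam's actual migration code or schema.

    // Illustrative only: batch all INSERTs into one transaction and relax
    // the journal/sync settings for the duration of a one-off migration.
    #include <QCoreApplication>
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QString>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
        db.setDatabaseName("example.db");   // hypothetical target file
        if (!db.open())
            return 1;

        QSqlQuery q(db);
        q.exec("CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY, name TEXT)");

        // Keep the journal file instead of deleting/recreating it per commit.
        q.exec("PRAGMA journal_mode = TRUNCATE");
        // Optional: trade crash safety for speed during the one-off migration.
        q.exec("PRAGMA synchronous = OFF");

        // One transaction around all inserts instead of one per INSERT.
        db.transaction();
        q.prepare("INSERT INTO demo (name) VALUES (?)");
        for (int i = 0; i < 90000; ++i)
        {
            q.bindValue(0, QString("picture-%1.jpg").arg(i));
            q.exec();
        }
        db.commit();

        return 0;
    }

The point is simply that committing once amortizes the disk sync over all rows instead of paying it once per INSERT, as described in the FAQ quoted above.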
The digiKam 7.0.0 stable release is now published: https://www.digikam.org/news/2020-07-19-7.0.0_release_announcement/ We need fresh feedback on this file using this version. Best regards, Gilles Caulier
Hi Thomas, and happy new year. Can you reproduce the problem with the digiKam 7.5.0 pre-release AppImage bundle for Linux, available here: https://files.kde.org/digikam/ Best regards
Hi all, digiKam 8.0.0 is out. Is the problem still reproducible? Best regards, Gilles Caulier
Hi, yes, I can reproduce it with digiKam-8.0.0-x86-64.appimage.
Thomas, what about this file using the current 8.2.0 AppImage Linux bundle? Is it still reproducible? https://files.kde.org/digikam/ Note: the bundle is now based on Qt 5.15.11 and KDE Frameworks 5.110. Thanks in advance, Gilles Caulier