Version: (using KDE 3.5.2) Installed from: Debian testing/unstable packages

While I realize that digiKam is currently hard-coded to use SQLite, it would be useful for me (and I suspect others) to also be able to use other database systems such as PostgreSQL and MySQL. My primary reason for wanting digiKam to support other RDBMSs (in particular PostgreSQL) is so that I can develop a web front end for the digiKam database. I would release any front end under a free license (probably an Apache-style license). Java drivers for SQLite are currently difficult to get working (especially on 64-bit) and, I believe, unsupported at present.
Thinking about this from the opposite direction, a method of achieving what you want may be possible by making the Gallery2 Kipi plugin integration better. I have helped to make this plugin Gallery2-compatible (as committed to SVN today/yesterday), but want to make the digiKam integration much better. While the website wouldn't be "live", it may be enough for what you are looking for. I've not started development on this feature yet, so nothing to show at the moment.
Since we are using QtSql now, isn't this (already) possible? Or do we still have hard-coded sqlite code in digiKam? Andi
I remember talking with the Amarok team and Marcel in the past about this subject. The problem is to have pure SQL requests compatible with all database backends. In theory it's true, but this must be tested in depth. And you can imagine that it will require a lot of time. Perhaps a tool can be written to test all SQL queries used in digiKam. Of course this requires a different way to register all queries in the database interface, for example using a list of SQL queries recorded in an XML file.

Gilles Caulier
It seems every SQL implementation has its own peculiarities and missing features. QtSql will handle the low-level stuff (like accessing the libraries for a database), but we have to write the SQL. I have never worked with MySQL, so for this task it would be best to have someone with MySQL experience, but we do not have such a person at the moment.
I have another good reason to ask for other backends: multi-user support.

I would like to set up digiKam for multiple users (on several PCs) that all access a central photo repository on a server. In a very limited way, this is already possible, but sharing tags, comments and other metadata is quite troublesome. Using a central database shared by all of these users would go a long way in solving this issue.

By the way, I propose altering the version to 0.10.x (or at least 0.9.x). I don't think this feature request will be implemented for 0.8.x anymore.
Thinking of multi-user support a bit further, just having a real RDBMS won't suffice. The current DB backend in 0.10 holds information on local as well as remote image collections. Obviously, local collections are not shared among multiple users on multiple PCs. So a more general approach could have a separate database per image collection, stored in the root album path of the collection (or in a real RDBMS). This would allow digiKam to access only those databases relevant for the image collections that are configured on each particular machine for each user.

An example with numbers: suppose we have a network of two PCs (PC A and PC B) and one server. PC A holds one local collection, LCollA (L stands for local). The same for PC B: LCollB. On the server are two image collections, NCollC and NCollD (N stands for network), which are exported over NFS. NCollC will be accessed from both PC A and PC B, while NCollD is only accessed from PC B.

Assuming each collection has its own database, this could be configured easily. LCollA and LCollB could continue to use SQLite, as they are only accessed locally by one user. NCollC should use MySQL (or any other networked RDBMS), as that would allow simultaneous access from PC A and PC B. NCollD can be configured either via SQLite or MySQL, as only PC B would access it. MySQL might be preferred here, because NFS doesn't always properly implement the locking mechanism used by SQLite, which could lead to database corruption (not in this particular use case, as only one PC would access it, so there is no locking issue).
I have basically implemented a generic database access layer for digiKam that can use any Qt-supported database. The background idea is to use an XML file which stores the database-specific queries. This makes it easy to modify existing statements without recompiling the project.

* An XML file, where the database settings (for the QSqlDatabase handler) are stored, together with the specific statements.
* A class which parses this XML file and stores the structure in the DatabaseParameters class.
* The Database class now has a method getDBAction(QString &dbAction) where the specific DBAction with its statements can be retrieved; it is used for the database schema creation. This works already.
* A new query/execSQL method which accepts a QMap to execute queries with named placeholders.

I have done this for the MySQL database as "reference" implementation. Quite difficult for me is now the setup dialog, which should do the following:
- Provide standard settings with SQLite file selection.
- Provide expert settings where all database connection parameters can be adjusted.
Can somebody help me with that?!

Next steps: DB locking - only one digiKam app should access the DB at a time.

Cheers :)
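Purely to illustrate the mechanism described above (this is not the patch itself; the function name, error handling and the dbconfig.xml layout simply follow the description in this comment), executing a single dbaction through QtSql/QtXml could look roughly like this:

#include <QtSql/QSqlDatabase>
#include <QtSql/QSqlQuery>
#include <QtSql/QSqlError>
#include <QtXml/QDomDocument>
#include <QFile>
#include <QDebug>

bool execDBAction(QSqlDatabase& db, const QString& actionName)
{
    QFile file("dbconfig.xml");              // hard-coded path, as in the current proposal
    QDomDocument doc;
    if (!file.open(QIODevice::ReadOnly) || !doc.setContent(&file))
        return false;

    QDomNodeList actions = doc.documentElement().elementsByTagName("dbaction");
    for (int i = 0; i < actions.count(); ++i)
    {
        QDomElement action = actions.at(i).toElement();
        if (action.attribute("name") != actionName)
            continue;

        // "transaction" mode wraps all statements of the action.
        bool useTransaction = (action.attribute("mode") == "transaction");
        if (useTransaction)
            db.transaction();

        QDomNodeList statements = action.elementsByTagName("statement");
        for (int j = 0; j < statements.count(); ++j)
        {
            QSqlQuery query(db);
            if (!query.exec(statements.at(j).toElement().text().trimmed()))
            {
                qWarning() << "Statement failed:" << query.lastError().text();
                if (useTransaction)
                    db.rollback();
                return false;
            }
        }

        if (useTransaction)
            db.commit();
        return true;
    }
    return false;                            // no action with that name in the file
}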
Quallenauge, very interesting. Can you show us your code to review?

Marcel, Andi, your viewpoints on an XML storage for SQL queries?

Gilles Caulier
Hi Gilles,

> Can you show us your code to review?

Yes, of course! But first I would like to merge the code with the latest SVN revision; I hope that this can be done within the next week. Expect some rough edges, e.g. the config file is read from a hardcoded path, and maybe some variable names are not in the right convention... :-)
Loading SQL from an XML file sounds like a good solution to me. It allows us to decouple the SQL queries from the C++ code and, as I understand it, removes the need to format SQL in C++. Great so far. My questions and comments:

1) I would recommend you to work in an SVN branch, unless you are using local Git already. A pity KDE has not yet moved to Git. Anyway, working in the open is better for complex changes.

2) Text handling: SQLite has only one type for text with no imposed length limit, so we never thought about this. For MySQL we need to choose an appropriate VARCHAR or TEXT type for the expected and allowed length of each text field. We must make sure that UTF-8 and an appropriate collation are used everywhere. Indexes on text fields are limited in length. I don't see a problem from this currently, but we should keep it in mind.

3) Which version of MySQL can we expect as minimum dependency? 5.0.3?

4) There are IIRC five source files where we use SQL: albumdb.cpp (95% of statements), schemaupdater.cpp (defining the schema), haariface.cpp (a few statements), imagelister.cpp (for album listing) and imagequerybuilder.cpp (building select clauses).

5) Config dialogs are not critical; we need them of course, but getting the backend to work has priority.

6) If we have MySQL support, we will probably default to MySQL embedded, providing full server support only as an expert setting. But as I understand it, this is only a matter of linking against a different library and providing the right parameters.

7) "DB locking - only one Digikam-App should access the DB": in fact we intend a usage scenario where multiple applications access the same DB concurrently. Currently you can open two digiKam instances on the same DB with no problems. Change messages are currently distributed over DBus; for network scenarios this must be done over a small network protocol (no priority for now). Table locking issues that come with real concurrent access require some separate thinking.

8) Transactions: we use transactions in a few places, but only as an SQLite-specific optimization, not because the operations need to be atomic. The only place where a transaction shall really ensure atomicity is the schema updating.

Thanks for your work so far!
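To make point 2) a bit more concrete, here is a hedged illustration (the lengths chosen are examples, not a schema proposal; the column names are borrowed from the existing ImageComments table) of what an explicit MySQL table definition with UTF-8 might look like through QtSql:

#include <QtSql/QSqlDatabase>
#include <QtSql/QSqlQuery>

void createImageCommentsTable(QSqlDatabase& db)
{
    QSqlQuery query(db);
    query.exec("CREATE TABLE IF NOT EXISTS ImageComments "
               " (id INTEGER PRIMARY KEY AUTO_INCREMENT, "
               "  imageid INTEGER, "
               "  language VARCHAR(32), "   // short, predictable content
               "  comment LONGTEXT) "       // unbounded user text (XMP imposes no limit)
               " ENGINE=InnoDB DEFAULT CHARSET=utf8;");
}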
(In reply to comment #10)
> 8) Transactions: we use transactions in a few places, but only as an SQLite-specific
> optimization, not because the operations need to be atomic. The only place
> where a transaction shall really ensure atomicity is the schema updating.

Transactions could have been a time saver for SQLite, if used more frequently. But still, I guess moving away from SQLite is the only good solution here. An abstraction with XML could really help integrate more DB backends.
Created attachment 35461 [details]
First proposal for a generic database access.

Okay, I have today synchronized with the head revision. No troubles - good :)

> 1) I would recommend you to work in an SVN branch, unless you are using local
> Git already. A pity KDE has not yet moved to Git. Anyway, working in the open
> is better for complex changes.

I don't have an SVN account for KDE, and I also don't have experience with git :(

> 2) Text handling: SQLite has only one type for text with no imposed length limit,
> so we never thought about this. For MySQL we need to choose an appropriate VARCHAR
> or TEXT type for the expected and allowed length of each text field.

According to the URL http://www.htmlite.com/mysql003.php we can choose "LONGTEXT - a string with a maximum length of 4294967295 characters" for extra long data fields. I have chosen VARCHAR(255) for most text fields.

> 3) Which version of MySQL can we expect as minimum dependency? 5.0.3?

I've been using version 5.0.75. But I think your proposed version is also okay, since I don't use anything special. Maybe the statements are even compatible with the 4.x.x versions.

> 4) There are IIRC five source files where we use SQL: albumdb.cpp (95% of
> statements), schemaupdater.cpp (defining the schema), haariface.cpp (a few
> statements), imagelister.cpp (for album listing) and imagequerybuilder.cpp
> (building select clauses).

I haven't touched all statements; I think this should be done step by step (every time we make changes to them).

> 5) Config dialogs are not critical; we need them of course, but getting the
> backend to work has priority.

Okay.

> 6) If we have MySQL support, we will probably default to MySQL embedded,
> providing full server support only as an expert setting. But as I understand it,
> this is only a matter of linking against a different library and providing the
> right parameters.

Right.

> 7) "DB locking - only one Digikam-App should access the DB": in fact we intend a
> usage scenario where multiple applications access the same DB concurrently.
> Currently you can open two digiKam instances on the same DB with no problems.
> Change messages are currently distributed over DBus; for network scenarios this
> must be done over a small network protocol (no priority for now).
> Table locking issues that come with real concurrent access require some
> separate thinking.

I also thought about the communication between the digiKam instances. I hope KDE provides a generic solution for this problem.

> 8) Transactions: we use transactions in a few places, but only as an SQLite-specific
> optimization, not because the operations need to be atomic. The only place
> where a transaction shall really ensure atomicity is the schema updating.

If that were needed, the DBAction can be modified with the "transaction" mode parameter.

Okay, now I will explain the patch. The file dbconfig.xml (the lookup path is hardcoded in the "void DatabaseParameters::readConfig()" method) contains all DB-relevant data:
- The connection data is stored there (at the moment).
- DBActions, which contain one or more DB statements.

<dbaction name="CreateDB" mode="transaction">
  <statement mode="query">CREATE TABLE AlbumRoots (id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT, label VARCHAR(255), status INTEGER NOT NULL, type INTEGER NOT NULL, identifier VARCHAR(255), specificPath VARCHAR(255), UNIQUE(identifier, specificPath));</statement>
  <statement mode="query">...</statement>
  ...
</dbaction>

The "mode" parameter defines the execution mode.
If it is set to "transaction", a transaction is opened, and if all statements are executed successfully a commit is performed; in all other cases a rollback is executed. The statement element also has a "mode" parameter which defines whether a "query" or a plain SQL command (for creating triggers, procedures, etc.) is executed.

I think there is only one problem with predefined statements: dynamically produced SQL queries. I suggest using special DBActions with mode "fragment" which contain only one statement with a specific part of a query. E.g.

<dbaction name="ImageTextCondition1" mode="fragment">
  <statement mode="query">UserText=:Text</statement>
</dbaction>

And in code it would look like:

select * from TextTable where
m_access->backend()->getDBAction(QString("ImageTextCondition1")).getStatement()
if (user has clicked imagetext){
  "OR"
  m_access->backend()->getDBAction(QString("ImageTextCondition2")).getStatement()
}

There are new methods in the databasecorebackend class:

databaseAction getDBAction(const QString &actionName);

With this method, a DBAction can be retrieved by its ID. This action is a parameter for the next method:

bool execDBAction(const databaseAction &action, const QMap<QString, QVariant>* bindingMap = 0, QList<QVariant>* values = 0, QVariant *lastInsertId = 0);

where the DBAction is executed. Note that this method also has a bindingMap which contains a list of named placeholders. With them you can use a statement like:

select * from table where id=:ID

An entry with the key ":ID" within the map contains the concrete value and will be replaced. (The current implementation allows SQL injection because of simple replacement via regexp, but I'm working on the replacing algorithm. The Qt-provided methods for binding named placeholders don't work, unfortunately :(
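As a rough sketch of how such fragments could be assembled into a dynamic query (the fragments are hard-coded in a map here so the example is self-contained; in the proposal they would come from dbconfig.xml via getDBAction()):

#include <QMap>
#include <QString>
#include <QStringList>

QString buildTextSearchQuery(bool includeImageText)
{
    // Stand-in for fragments loaded from dbconfig.xml.
    QMap<QString, QString> fragments;
    fragments.insert("ImageTextCondition1", "UserText=:Text");
    fragments.insert("ImageTextCondition2", "ImageText=:Text");

    QStringList conditions;
    conditions << fragments.value("ImageTextCondition1");
    if (includeImageText)
        conditions << fragments.value("ImageTextCondition2");

    // The returned SQL still contains the :Text placeholder, to be bound before execution.
    return QString("SELECT * FROM TextTable WHERE (%1);")
           .arg(conditions.join(" OR "));
}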
I've been following this bug since submitting it a while back so it's nice to see someone taking a crack at it. I regularly use a library called Hibernate to provide me with a database agnostic data store and I was wondering if there is anything similar that you could use with Digikam. Hibernate is Java based but I know there is also a .NET port (NHibernate) which is pretty mature. I've had a quick look for C++ ORM layers and none seem to be as advanced as Hibernate but Debea might be worth looking at. I've developed applications in the past where queries are held in XML files. It works fairly well but it rapidly becomes complex when the user can cause dynamic queries to be produced. An ORM layer really helps in this situation as it handles writing the DB specific SQL and all the escaping etc. It would be a big change but it might be worth it.
I also have used hibernate on java and I must agree: Building of dynamic queries is really cool with an ORM layer!!!^2 The other side, as you describes is the big change: There must be entities which represents the database with all it's releationships. (I'm not sure about my next concern:) Also you must use the same database schema for the underlying DBMS, which can cause in some cases problems. E.g. at the digikam-db there is a table tagtree which contains a adjacency list model for tag hierarchy. This table is filled via some triggers. In mysql I was not able to use this table 1:1, so I used another technique to implement this feature. I'm not sure if this is as easy with an ORM mapper. (I have had hibernate only used with one DBMS type).
> I don't have an SVN account for KDE, and I also don't have experience with
> git :(

It's all right to do this with patches. Though, depending on how much work this is, it may be more convenient to get a KDE SVN account; getting an account is easy, see http://techbase.kde.org/Contribute/Get_a_SVN_Account No need for git.

> According to the URL http://www.htmlite.com/mysql003.php we can choose
> "LONGTEXT - a string with a maximum length of 4294967295 characters"
> for extra long data fields.
> I have chosen VARCHAR(255) for most text fields.

Some will be longer (comments, copyright entries), some have a fixed length (uniqueHash), for some a short string like 255 may be all right (make, model), paths are difficult to predict (260? No. 1024? 4096? Even longer?)

> > 3) Which version of MySQL can we expect as minimum dependency? 5.0.3?
> I've been using version 5.0.75. But I think your proposed version is also okay, since I
> don't use anything special. Maybe the statements are even compatible with the 4.x.x versions.

Best choice is to be compatible with Amarok here. They say 5.0. We should pick a version widely available on all modern distributions, not too recent and not too old.

> > 7) "DB locking - only one Digikam-App should access the DB": in fact we intend a
> > usage scenario where multiple applications access the same DB concurrently.
> > Currently you can open two digiKam instances on the same DB with no problems.
> > Change messages are currently distributed over DBus; for network scenarios this
> > must be done over a small network protocol (no priority for now).
> > Table locking issues that come with real concurrent access require some
> > separate thinking.
>
> I also thought about the communication between the digiKam instances.
> I hope KDE provides a generic solution for this problem.

In AlbumDB, there are a lot of backend->recordChangeset(...) statements. All these changesets are already broadcast as DBus signals to other digiKam apps. They are likewise the exclusive mechanism for application-internal change notification.

> > 8) Transactions: we use transactions in a few places, but only as an SQLite-specific
> > optimization, not because the operations need to be atomic. The only place
> > where a transaction shall really ensure atomicity is the schema updating.
>
> If that were needed, the DBAction can be modified with the "transaction" mode
> parameter.

Ok.

> I think there is only one problem with predefined statements: dynamically
> produced SQL queries.
> I suggest using special DBActions with mode "fragment" which contain only one
> statement with a specific part of a query. E.g.
> <dbaction name="ImageTextCondition1" mode="fragment">
>   <statement mode="query">UserText=:Text</statement>
> </dbaction>
>
> And in code it would look like:
> select * from TextTable where
> m_access->backend()->getDBAction(QString("ImageTextCondition1")).getStatement()
> if (user has clicked imagetext){
>   "OR"
>   m_access->backend()->getDBAction(QString("ImageTextCondition2")).getStatement()
> }

I guess this will work. There are quite a lot of small fragments in imagequerybuilder.cpp. Some are trivial, like single brackets, and maybe only a fraction of these needs to be DBMS-specific.

> There are new methods in the databasecorebackend class:
>
> databaseAction getDBAction(const QString &actionName);
> With this method, a DBAction can be retrieved by its ID.
> This action is a parameter for the next method:
>
> bool execDBAction(const databaseAction &action, const QMap<QString, QVariant>* bindingMap = 0, QList<QVariant>* values = 0, QVariant *lastInsertId = 0);
>
> where the DBAction is executed. Note that this method also has a bindingMap
> which contains a list of named placeholders. With them you can use a statement like:
> select * from table where id=:ID. An entry with the key ":ID" within the map
> contains the concrete value and will be replaced.
> (The current implementation allows SQL injection because of simple replacement
> via regexp, but I'm working on the replacing algorithm. The Qt-provided methods
> for binding named placeholders don't work, unfortunately :(

Yes, if you look at the source of the Qt MySQL driver, it supports only positional placeholders, no named placeholders. Same for SQLite, by the way; that's why we only use positional placeholders so far.

There are a few batch operations at performance-critical points. No need for true batch operations here, but prepared queries should be used. There will be a specified number and order of expected bound values for each action anyway, as far as I see. So why not use positional binding?

> I also have used Hibernate on Java and I must agree:
> Building of dynamic queries is really cool with an ORM layer!!!^2

I shortly thought about this two years ago when we decided to take Qt SQL. There is an additional layer involved, adding complexity, where direct SQL writing comes easily.
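For reference, plain positional binding with a prepared query through QtSql looks like the following; the Tags columns are taken from the existing schema, while the function itself is only an illustration of the reusable prepared-query pattern mentioned above:

#include <QtSql/QSqlDatabase>
#include <QtSql/QSqlQuery>
#include <QStringList>

void insertTags(QSqlDatabase& db, const QStringList& names, int parentId)
{
    QSqlQuery query(db);
    // Prepared once, executed many times with positional '?' placeholders.
    query.prepare("INSERT INTO Tags (pid, name) VALUES (?, ?);");

    foreach (const QString& name, names)
    {
        query.bindValue(0, parentId);
        query.bindValue(1, name);
        query.exec();
    }
}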
[KDE SVN Account]
Done. I would like to check my changes in to SVN. How should the branch be named?! (Proposal: /home/kde/branches/extragear/graphics/digikam/0.10-extdb)

[Field length]
> ...paths are difficult to predict (260? No. 1024? 4096? Even longer?)
In these cases, I would use a reasonable limit. Better to allocate more space than too little. (For LONGTEXT there is an overhead of 4 bytes per entry, see http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html ).

[MySQL Version]
> Best choice is to be compatible with Amarok here. They say 5.0. We should pick
> a version widely available on all modern distributions, not too recent and
> not too old.
I'm fine with MySQL 5.0.

[Multi Instance DB Access]
> In AlbumDB, there are a lot of backend->recordChangeset(...) statements. All these changesets
> are already broadcast as DBus signals to other digiKam apps.
> They are likewise the exclusive mechanism for application-internal change notification.
Okay, as I understand it there are already some points which can be used for the future network protocol. I think this should be the second or third step of our journey to external DB access.

[Named placeholders]
> Yes, if you look at the source of the Qt MySQL driver, it supports only
> positional placeholders, no named placeholders. Same for SQLite, by the way; that's why
> we only use positional placeholders so far.
That makes things clear...

> There are a few batch operations at performance-critical points. No need for
> true batch operations here, but prepared queries should be used.
Ok, we can use the statements unmodified.

> There will be a specified number and order of expected bound values for each
> action anyway, as far as I see. So why not use positional binding?
I have some points:
1) The code is flexible. You can change the order of parameters in a statement (in case of bugfixing or optimizing a query).
2) You can easily add additional conditions or a sub-select with the already used placeholders. E.g.:
Before: SELECT * FROM table where ID=:ID
After: SELECT * FROM table where ID=:ID AND PID=:ID
3) A DB action accepts 0-n statements. When they are executed, positional binding cannot be used because the placeholders are all in different places. E.g.:
<dbaction name="GetTableContent" mode="transaction">
  <statement mode="query">INSERT INTO table (name) VALUES (:name)</statement>
  <statement mode="query">SELECT FROM table WHERE ID=:ID</statement>
</dbaction>
4) It doesn't add complexity, but gives more readable code. You just have to write
QMap<QString, QVariant> parameters;
parameters.insert(":tagPID", parentTagID);
parameters.insert(":tagname", name);
and give that to the SQL-executing method. The only side effect is a little overhead for creating the QMap object at runtime.

BTW: I now have an implementation which is safe against SQL injection. I will check it in as soon as I have a branch :)

[ORM]
I'm fine with SQL queries.
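One conceivable, injection-safe way to keep the named placeholders from the XML actions while still using the drivers' positional binding underneath is to rewrite ":name" tokens to '?' and bind the mapped values in order of appearance, instead of pasting values into the SQL text. This is only a sketch of the idea, not the committed implementation (and it ignores corner cases such as ':' inside string literals):

#include <QtSql/QSqlDatabase>
#include <QtSql/QSqlQuery>
#include <QMap>
#include <QRegExp>
#include <QVariant>
#include <QList>

bool execWithNamedBindings(QSqlDatabase& db, const QString& sql,
                           const QMap<QString, QVariant>& bindings)
{
    QRegExp placeholder(":[A-Za-z0-9_]+");
    QList<QVariant> orderedValues;
    QString positionalSql = sql;

    int pos = 0;
    while ((pos = placeholder.indexIn(positionalSql, pos)) != -1)
    {
        // Remember the value for this placeholder and replace the token with '?'.
        orderedValues << bindings.value(placeholder.cap(0));
        positionalSql.replace(pos, placeholder.matchedLength(), "?");
        ++pos;
    }

    QSqlQuery query(db);
    query.prepare(positionalSql);

    foreach (const QVariant& value, orderedValues)
        query.addBindValue(value);

    return query.exec();
}

Note that this approach also handles a placeholder reused several times in one statement (such as ":ID" above): it is simply bound once per occurrence, which is exactly the flexibility argued for in points 1) and 2).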
> Done. I would like to check my changes in to SVN. How should the branch be named?!
> (Proposal: /home/kde/branches/extragear/graphics/digikam/0.10-extdb)

No. Please use this path: /home/kde/branches/digikam/database-ext

Gilles Caulier
> [Field length]
> > ...paths are difficult to predict (260? No. 1024? 4096? Even longer?)
> In these cases, I would use a reasonable limit. Better to allocate more space
> than too little. (For LONGTEXT there is an overhead of 4 bytes per entry,
> see http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html ).

String size limitation is a mess, since we host XMP strings in the DB. XMP has no size limitation. Anyway, some strings such as make and model are generally short (255 chars is enough), but some others, such as all strings dedicated to IPTC Core, can be long, e.g. descriptions for photo agency workflows. Please take care about this.

Gilles Caulier
Hi Gilles,

> No. Please use this path: /home/kde/branches/digikam/database-ext

Can you create this path? I don't have sufficient permissions to do that.
Ah, yes, me too, there are restrictions now... too weird.

Ok, I just copied trunk here: http://websvn.kde.org/branches/extragear/graphics/digikam/1.0-databaseext/

You can play with the code now (:=)))

I will try to keep this branch in sync with the trunk source code, to simplify backporting to trunk in the future.

Gilles
> [Field length]
> > ...paths are difficult to predict (260? No. 1024? 4096? Even longer?)
> In these cases, I would use a reasonable limit. Better to allocate more space
> than too little. (For LONGTEXT there is an overhead of 4 bytes per entry,
> see http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html ).

We can add info about a "minimum guaranteed field size" in our DBSCHEMA.ods schema spec. It feels good to say "no limit", but a maximum size of 4 GB is, in practice, the same.

> BTW: I now have an implementation which is safe against SQL injection. I will check it in
> as soon as I have a branch :)

All right, go on.
A screenshot of the MySQL config panel, still under development:

http://farm3.static.flickr.com/2527/3760951023_7398e18a79_o.png

Thanks to Quallenauge for this great work...

Gilles Caulier
*** Bug 134594 has been marked as a duplicate of this bug. ***
*** This bug has been confirmed by popular vote. ***
*** Bug 206416 has been marked as a duplicate of this bug. ***
Doing all this only to support yet another embedded database seems like a lot of hassle for very limited advantages. Why is a KDE user expected to have at least three databases installed on his system? There is Akonadi, which uses MySQL; then there is Amarok, which (currently) uses embedded MySQL but might connect to a real MySQL server in the future; and then there is digiKam, which has a lot of work being committed to achieve a setting that is only available "for experts as a matter of linking". Sorry, but this should be easily available. I'd rather have one database so I only have to dump one to make backups. Not to mention the overhead of having different databases running at the same time. So I kindly ask you to reconsider how this is going to be made available & configurable.
The main goal (for me at least) is to connect to a database (in a DBMS) in my own intranet, which is mostly implemented and working in a development branch.

> [...]
> and then there is digiKam, which has a lot of work being committed to achieve a
> setting that is only available "for experts as a matter of linking"
> [...]

This was a comment about achieving embedded DBMS support in Qt applications. We are now going in another direction and evaluating running our own DBMS server, like Akonadi does.

> Not to mention the overhead of having different databases running at the same time.

That's right, but providing an embedded/own server means better independence from other projects (e.g. maybe in the future it turns out that another DBMS is better for our purposes - then we could support that quickly). You are also right that it should be configurable to connect to only one DBMS, and with Akonadi and digiKam this is already implemented.
Updated database config panel (SQLite and MySQL):

http://farm5.static.flickr.com/4049/4329227145_6ea266c9b9_o.png
http://farm5.static.flickr.com/4055/4329961366_09725b8569_o.png

Gilles Caulier
Created attachment 41241 [details] Database definition with foreign keys
Is there any plan to adopt foreign keys in the upcoming SQLite/MySQL database implementation? I think that adopting foreign keys would avoid many of the current triggers in the database definition, and I guess that the database would be more efficient. I have managed to add foreign keys to the current database definition (see attachment). Note that SQLite version 3.6.19+ is needed to execute these statements (see http://www.sqlite.org/foreignkeys.html). The only shortcoming I have noticed so far is that the icon column of the Albums table can not refer to the id column of the Images table, since this would generate a circular dependency between the Albums and Images tables. However, this would be easily solved by adding a new AlbumIcon table or similar. Best regards.
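For readers without the attachment, a small illustration of the approach on SQLite (not the attachment itself; the Images -> Albums relation is from the real schema, while the exact constraint wording and ON DELETE behaviour are only examples):

#include <QtSql/QSqlDatabase>
#include <QtSql/QSqlQuery>

void createImagesTableWithForeignKey(QSqlDatabase& db)
{
    QSqlQuery query(db);

    // Foreign key enforcement is off by default in SQLite (3.6.19+) and must be
    // enabled per connection.
    query.exec("PRAGMA foreign_keys = ON;");

    query.exec("CREATE TABLE IF NOT EXISTS Images "
               " (id INTEGER PRIMARY KEY, "
               "  album INTEGER, "
               "  name TEXT NOT NULL, "
               "  FOREIGN KEY (album) REFERENCES Albums (id) "
               "    ON DELETE CASCADE);");
}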
Holger, as you have merged the MySQL port branch to trunk, I think this file can be closed now. What do you think?

Gilles Caulier
In that case, should the description not be changed and another bug filed? PostgreSQL support is not yet available.
I renamed the title.

Francesco, will you manage this file in the future?

Gilles Caulier
(In reply to comment #33)
> I renamed the title.
>
> Francesco, will you manage this file in the future?
>
> Gilles Caulier

I'm not experienced with Postgres, but the work I've started in the sql branch points exactly to this: making the use of multiple backends viable and easy. An eye will be kept on user personalization. At the end of the holidays I'll look into this whole bug.
What is the status of this ticket? digiKam still supports only MySQL and SQLite (at least under Gentoo), even though QtSql is compiled with PostgreSQL support.
(In reply to Geert Janssens from comment #5)
> I have another good reason to ask for other backends: multi-user support.
>
> I would like to set up digiKam for multiple users (on several PCs) that all
> access a central photo repository on a server. In a very limited way, this
> is already possible, but sharing tags, comments and other metadata is quite
> troublesome.
> Using a central database shared by all of these users would go a long way in
> solving this issue.
>
> By the way, I propose altering the version to 0.10.x (or at least 0.9.x). I
> don't think this feature request will be implemented for 0.8.x anymore.

I'm working to develop a system for a client company that wanted to use digiKam for their photographers and have its workflow data incorporated into the PostgreSQL database used by their Odoo ERP system running on a cloud server. The inability to use other databases severely limits digiKam's usefulness to professional photographers. Poor database support and the inability to save a set of batch tool processing steps that could be called from the command line and applied to a group of files are two things that limit digiKam's usefulness to hobbyist and self-employed photographers working alone. This is really a shame for a program that is otherwise so good, with support for unique functions such as non-destructive edits.

My client's request seemed simple: to allow photographers to move raw images from their cameras to a network folder, where a server with several CPU cores would apply a set of digiKam edits, after which graphic artists could then finish work on the images. It is really difficult to explain to my non-programmer client why digiKam doesn't support doing something this simple, given its many other features. Unfortunately I'm a Python developer rather than C++, so I can't make the changes she wants to digiKam. As a result, I am losing the contract to someone she's hired to implement the system using Windows and Adobe software.
Hi,

Is there hope to see PostgreSQL as a database backend? This bug has had no activity for years. If you need a PostgreSQL DBA to test, I'll volunteer.

Yours,
Hi Christophe, and happy new year,

The difficulty is to find a student to work on the topic. Maik, do you think we can propose this topic for Google Summer of Code this year? As the MySQL/MariaDB backend works better now, it's perhaps time to take a look at PostgreSQL...

Gilles
I'm also very much interested in having this implemented. I have almost no clue about the level of difficulty and the programming effort it requires when you start from almost zero (IDE, git and KDE account OK, and being able to build the software OK), but otherwise I have never touched C++ or Qt :-) I'm now spending a lot of time on photography (whether it will become my main activity is still a mystery ;-). I will first check what I can obtain with a foreign data wrapper and with pgloader against a MariaDB or SQLite digiKam DB, and check the schema.
I dealt with the differences between MySQL and PostgreSQL some time ago. Yes, some SQL commands are the same, but a lot are different. A lot of SQL statements would have to be programmed a second time, from scratch, in digiKam. The effort is large and I think too big for a future GSoC-202X task.

Maik
Hi Maik,

The SQL part should be the easy part for me, and I'm also in favor of having the right SQL dialect (even if Postgres is the closest to the standard) adapted for each engine. I also have to recheck the whole thread (like the proposal with foreign keys). Could you confirm that the SQL code is all located in the same place, in libs/database?
Hi Bruno,

This is a long story, with plenty of technical difficulties. Currently we have these kinds of database supported:

- SQLite (local)
- MySQL (local)
- MySQL (remote)
- MariaDB (local)
- MariaDB (remote)

All use the QtSql plugin layer to access the database. So PostgreSQL is possible, as a plugin for this database already exists.

But accessing the database is not enough. We need to manage tables and contents depending on the database engine's features. For example, MariaDB and SQLite do not provide the same functionality, and managing relations between tables needs an adjusted schema. This is done in an XML schema describing the table properties depending on the database engine. The rest of the code is factored into common SQL queries that are more or less portable.

The first database supported was SQLite 2, now 3. Later MySQL came, followed by MariaDB. But in the schema and the implementation for the remote databases, there is still some implementation work to do and serious optimization needed. Also, the database interface does not yet permit concurrent access from different digiKam instances.

So to add support for a new database, we need to finalize all the features planned for remote support, and then PostgreSQL can be added safely.

Voilà

Gilles Caulier
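As a side note on the "plugin already exists" point: the connection side really is the easy part, since QtSql ships a PostgreSQL driver ("QPSQL"); the hard work is in the schema differences, remote handling and concurrency issues listed above. The connection parameters below are placeholders only:

#include <QtSql/QSqlDatabase>

bool openPostgresConnection()
{
    // "QPSQL" is the stock QtSql driver name for PostgreSQL.
    QSqlDatabase db = QSqlDatabase::addDatabase("QPSQL", "digikam-pgsql");
    db.setHostName("dbserver.example.org");
    db.setDatabaseName("digikam");
    db.setUserName("digikam");
    db.setPassword("secret");
    return db.open();
}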
Hi Gilles, thanks for all the information. Getting the job done seems to be a bit more complicated than I was thinking, and would be a perfect job for a Randa Meeting :-( So in the meantime I will get more familiar with the program and its data model, and check with pgloader whether I run into big troubles. Also, if the remote backend can't currently handle concurrent access (which was one of my goals and desires), I will certainly have to wait for it. Hope to see you again soon somewhere IRL! Take care, and all the best for the 7.x final release.