Version: 2.4-GIT
OS: Linux

When I organize files, the files are moved to the new location (the old files are deleted) and database entries are created for the new files, but the old database entries are neither updated nor deleted. This results in duplicates in the collection. It is a database bug, since it persists after restarting Amarok. Also note that the right pane shows the tracks whose paths I expected Amarok to update after organizing; they are greyed out, which means that Amarok did not, in fact, update the paths in the playlists referencing the organized files.

Code: amarok HEAD

Reproducible: Didn't try

Steps to Reproduce:
Use the Organize Files feature.

Actual Results:
duplicate database rows are created
duplicate in-memory model entries are created
playlists are not updated

Expected Results:
existing in-memory model entries are updated
playlists are updated
database rows are updated

OS: Linux (x86_64) release 3.1.4-1.fc16.x86_64
Compiler: gcc
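For anyone who wants to check whether their database is affected, here is a minimal Qt/C++ sketch that looks for uniqueid values occurring more than once in the urls table (the table and column named in the commit further down). The connection setup is an assumption for illustration only: Amarok 2.x normally runs an embedded MySQL server rather than reading a plain SQLite file, so the driver and path will differ on a real installation.

#include <QCoreApplication>
#include <QDebug>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QVariant>

// Sketch: list duplicate uniqueid values in Amarok's urls table.
// The SQLite driver and database file name are hypothetical; Amarok 2.x
// normally uses an embedded MySQL database.
int main( int argc, char **argv )
{
    QCoreApplication app( argc, argv );

    QSqlDatabase db = QSqlDatabase::addDatabase( "QSQLITE" );
    db.setDatabaseName( "amarok.db" ); // hypothetical path to the collection db
    if( !db.open() )
        return 1;

    // A sane collection has exactly one urls row per uniqueid; any row
    // reported here is an instance of this bug.
    QSqlQuery query( db );
    query.exec( "SELECT uniqueid, COUNT(*) FROM urls "
                "GROUP BY uniqueid HAVING COUNT(*) > 1" );
    while( query.next() )
        qDebug() << "duplicate uniqueid:" << query.value( 0 ).toString()
                 << "rows:" << query.value( 1 ).toInt();
    return 0;
}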
Created attachment 66897 [details] showing the effects of Organize Files
Additional info: the Update Collection function deleted the invalid entries. The playlists, however, were not updated. Amarok's file tracking clearly does not work here.
I assume you are using 2.5-git, right? The version tag changed a few days ago.
Yes, yes, good ma'am.
Reporters, please try the patch from https://git.reviewboard.kde.org/r/105488; it should virtually solve this bug.
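To illustrate the kind of check the patch needs to add, a duplicate-aware insert could look roughly like the sketch below. This is a hypothetical helper, not the actual Review Board patch; the real urls table has more columns (device id, directory, and so on) than shown here.

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QString>
#include <QVariant>

// Hypothetical duplicate-aware insert for the urls table; illustrates the
// check SqlCollectionLocation was missing. Columns are simplified and this
// is not the actual patch.
static int insertOrReuseUrl( QSqlDatabase db, const QString &uniqueId,
                             const QString &rpath )
{
    QSqlQuery query( db );
    query.prepare( "SELECT id FROM urls WHERE uniqueid = :uid" );
    query.bindValue( ":uid", uniqueId );
    query.exec();

    if( query.next() )
    {
        // An entry for this track already exists: update its path in place
        // instead of inserting a duplicate row.
        const int id = query.value( 0 ).toInt();
        QSqlQuery update( db );
        update.prepare( "UPDATE urls SET rpath = :rpath WHERE id = :id" );
        update.bindValue( ":rpath", rpath );
        update.bindValue( ":id", id );
        update.exec();
        return id;
    }

    QSqlQuery insert( db );
    insert.prepare( "INSERT INTO urls ( uniqueid, rpath ) "
                    "VALUES ( :uid, :rpath )" );
    insert.bindValue( ":uid", uniqueId );
    insert.bindValue( ":rpath", rpath );
    insert.exec();
    return insert.lastInsertId().toInt();
}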
*** Bug 301462 has been marked as a duplicate of this bug. ***
Git commit 82d7b2e04df4b5a6cd584d3860f71ace127dbce9 by Matěj Laitl.
Committed on 10/07/2012 at 09:32.
Pushed by laitl into branch 'master'.

SqlScanResultProcessor: cope with non-unique uniqueid in the database

Unfortunately, the uniqueid column (or rather its index) of our urls table is not defined as unique, and at least some code in SqlCollection doesn't check for duplicates before inserting into the table. This can be provoked, for example, by using the "Organize Collection" functionality.

While fixing SqlCollectionLocation in the short term and making the uniqueid index unique in the long term is probably needed, we also have to cope with existing user databases. This change is needed because SqlScanResultProcessor identified tracks solely by their uniqueid, which resulted in unpredictable and incorrect behaviour: for example, it never removed the "old" duplicate entry in deleteDeletedTracks( int ) and sometimes found an incorrect entry when importing a track in commitTrack().

This does not solve bug 289338, but it dramatically reduces its consequences: the (correct) duplicates are removed as soon as the collection scanner runs.

v2: the test failure spotted by Sentynel uncovered a bug in patch v1. There was an assertion that sometimes failed even during normal operation, because database updates were temporarily blocked. Fixed by moving the assertion to a place where it is valid in all cases.

M  +40 -24  src/core-impl/collections/db/sql/SqlScanResultProcessor.cpp
M  +12 -3   src/core-impl/collections/db/sql/SqlScanResultProcessor.h

http://commits.kde.org/amarok/82d7b2e04df4b5a6cd584d3860f71ace127dbce9
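The commit above is the short-term coping measure; the long-term fix it alludes to is deduplicating the urls table and then making the uniqueid index unique so duplicates can never be inserted again. A rough sketch of what that migration could look like follows. The index name urls_uniqueid, the id primary-key column, and the SQLite-flavoured statements are assumptions; Amarok's embedded MySQL would need slightly different syntax (e.g. DROP INDEX ... ON urls).

#include <QSqlDatabase>
#include <QSqlQuery>

// Sketch of the "long-term" fix mentioned in the commit message: remove
// duplicate urls rows, then enforce uniqueness at the schema level. Not
// the actual Amarok migration code.
static void deduplicateUrlsTable( QSqlDatabase db )
{
    QSqlQuery query( db );

    // Keep the row with the lowest id for each uniqueid and drop the rest
    // (assumes urls has an integer primary key named id).
    query.exec( "DELETE FROM urls WHERE id NOT IN "
                "( SELECT MIN(id) FROM urls GROUP BY uniqueid )" );

    // With the duplicates gone, make the index unique so SqlCollection can
    // no longer insert duplicate uniqueid rows.
    query.exec( "DROP INDEX IF EXISTS urls_uniqueid" );
    query.exec( "CREATE UNIQUE INDEX urls_uniqueid ON urls( uniqueid )" );
}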
Closing, as there has been no further feedback about failures.