Bug 341192 - Moving messages does not synchronize to disk
Summary: Moving messages does not synchronize to disk
Status: RESOLVED NOT A BUG
Alias: None
Product: Akonadi
Classification: Frameworks and Libraries
Component: Maildir Resource
Version: GIT (master)
Platform: openSUSE Linux
Importance: NOR critical
Target Milestone: ---
Assignee: kdepim bugs
 
Reported: 2014-11-23 10:02 UTC by Rigo Wenning
Modified: 2022-10-21 10:53 UTC

Description Rigo Wenning 2014-11-23 10:02:50 UTC
I have a large IMAP folder, too large for IMAP, so I expired all messages older than 120 days (about 4500) to a folder on my local disk. In KMail those messages were moved, Akonadiconsole shows all of them in the new local folder, and I can see and access the messages. But if I go to .local/share/.local-mail-directory/.archive/new-folder/ and do an ls -lR, there are no messages stored on disk. Akonadi simply does not synchronize the moved messages to disk, but it is supposed to. It also seems to be a matter of scale, because messages expired later (one by one) do arrive physically on disk in the new location. How can I force Akonadi to write its cache to disk?

Reproducible: Sometimes

Steps to Reproduce:
1. Move more than 1000 messages from an imap resource to a local folder

Actual Results:  
Akonadi shows the messages, but they are never written to disk

Expected Results:  
Akonadi is a cache. It should write those messages to the disk at the expected location.

I have seen this before. At some point Akonadi will realise that there are no messages on disk, synchronise with the disk, and thereby erase all messages that were moved into the local folder. So the message loss hasn't occurred yet, but it will soon.
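The on-disk check described in the report can be sketched as a small shell helper. This is a sketch, not part of the original report: a maildir stores one message per file under cur/ and new/ (tmp/ only holds partial writes), so counting those files should match what KMail displays. The demo uses a throwaway directory; in the bug scenario the argument would be the folder under .local/share/ instead.

```shell
# Count the messages physically present in a maildir folder.
# One file under cur/ or new/ corresponds to one message.
count_maildir_messages() {
    find "$1/cur" "$1/new" -type f 2>/dev/null | wc -l
}

# Demo on a throwaway maildir so the helper can be exercised anywhere.
demo=$(mktemp -d)
mkdir -p "$demo/cur" "$demo/new" "$demo/tmp"
touch "$demo/new/msg1" "$demo/new/msg2" "$demo/cur/msg3"
count_maildir_messages "$demo"   # prints: 3
rm -rf "$demo"
```

If KMail reports 4500 messages but this count is near zero for the target folder, the messages exist only in Akonadi's cache, which is exactly the symptom reported here.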
Comment 1 David Faure 2014-12-02 11:10:58 UTC
Sounds more like a bug in the maildir resource, no?
Comment 2 Martin Steigerwald 2015-03-12 11:20:23 UTC
Rigo, as you said, Akonadi is a cache. I thought it was only a read cache, but in a recent discussion on debian-kde Kevin Krammer told me it may also cache writes for some amount of time, and it would need to in case an IMAP resource is unavailable due to interim network issues[1].

Could it be that the mails are in ~/.local/share/akonadi/file_db_data? Akonadi uses it as a temporary buffer, I think also when moving mails. So the mails may be there, and Akonadi will eventually move them to their final destination after some time. If KMail and Akonadiconsole still see the mails, I bet they are still there on disk, just not where you expect them to be.

And yes, I find this behavior highly confusing, especially as I have seen Akonadi cache mails in there for long periods of time. I'd expect it to behave more like a filesystem write journal, i.e. flush to the destination storage as soon as possible.

 [1] Re: Possible akonadi problem?
https://lists.debian.org/debian-kde/2015/02/msg00009.html
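The suggestion above can be checked with a one-liner. As a sketch (wrapped in a helper so the path stays explicit; ~/.local/share/akonadi/file_db_data is the location named in the comment):

```shell
# Count the payload files Akonadi has parked in file_db_data.
count_cached_payloads() {
    find "$1" -type f 2>/dev/null | wc -l
}

count_cached_payloads "$HOME/.local/share/akonadi/file_db_data"
```

A large and growing count here, combined with an empty target maildir, would support the theory that the moved mails are stuck in the intermediate buffer.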
Comment 3 Rigo Wenning 2015-03-12 12:25:18 UTC
I have a file_db_data directory with 57450 files, so it hasn't synchronized locally in ages IMHO. If it synchronizes now, it will probably create a mess. BTW, I fully support your thoughts in [1]. Note I'm using openSUSE binaries (currently 4.14.5).
Comment 4 Martin Steigerwald 2015-03-12 13:06:29 UTC
Rigo, I suggest you review

[kdepim-users] Work-around to issues with Akonadi file based caching (was: Re: rant)
http://lists.kde.org/?l=kdepim-users&m=142382010023511&w=2

regarding the open issues with Akonadi's file-based caching.

It has some hints on how to remedy the issue at least partly.

And no, 57000 mails in there aren't that much. I have seen up to 850000 files in there.

The SizeThreshold work-around I mentioned works quite well for me; I will post another mail to the thread, and probably to some of the related bug reports, in a moment.
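For reference, the work-around being discussed is a setting in Akonadi's server configuration, typically ~/.config/akonadi/akonadiserverrc. This is an illustrative sketch, not a recommended value; the threshold controls which payloads are written as external files:

```ini
[General]
; Illustrative value only. Payloads larger than this many bytes are
; stored as external files under file_db_data; smaller payloads stay
; in the database, so raising it reduces file_db_data traffic.
SizeThreshold=32768
```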
Comment 5 Rigo Wenning 2015-03-12 16:35:39 UTC
I did everything suggested (size setting in the config, fsck and vacuum). I moved around 1200 messages from the IMAP account to a local folder before running akonadictl fsck. I currently have 198 files in the file_db_data directory, and one file in the local maildir where there are supposed to be 1200 files for 1200 messages. So the cache here isn't a cache anymore: Akonadi has removed the files from the initial IMAP store (locally) but has NOT written them to the new folder. This means a great risk of data loss and a high potential for bogus backups, especially if one does not only use kmail2/kontact for email.
Funnily enough, the emails in the local folder can be accessed, but they are nowhere in the filesystem; they exist only in the database. So if the database at some point decides that the status of the local filesystem takes precedence, it will remove all the email it has in the database, at which point I will have lost 1200 emails.
Comment 6 Martin Steigerwald 2015-03-12 18:12:31 UTC
Rigo, yes, it doesn't help to avoid the caching. And if you prefer the file-based caching, then reduce the SizeThreshold again, though for recoverability I don't think there is that much of a difference, as the file_db_data directory has all files in one directory under different names. Sure, you can use grep more easily to dig out the files, so by all means, if you don't trust the database, reduce the SizeThreshold again.

What you are asking for here, and I fully agree, Rigo, is a lower timeout for the write caching. As I wrote already: have Akonadi write files to their final location more quickly, not only when moving messages, but generally. The write caching should act like a filesystem journal: try to write to the final location soon. Sure, filesystems with a bigger journal work faster too, yet as you see here, it takes ages for the files to appear in the final location. Hey, I can check it here, as I moved some mails today.

- kernel-ml-2015-1 according to KMail has 47834 unread mails.
- kernel-ml-2014-1 according to KMail has only 27783 unread mails.
- I moved about 30000 mails from the first to the second folder earlier today (see bug #345085 and bug #345084 for details on what I did and which issues I experienced)


So let's check the filesystem:

martin@merkaba:~/.local/share/local-mail/.Lichtvoll.directory/.Linux.directory> find kernel-ml-2014-1 | wc -l
28627
martin@merkaba:~/.local/share/local-mail/.Lichtvoll.directory/.Linux.directory> find kernel-ml-2015-1 | wc -l
49097

Okay, so it has meanwhile moved the mails.

Still, I have no idea when Akonadi does this, or for how long it will cache writes under what circumstances. Additionally:

I still don't get why the mails have to go through file_db_data or the database at all. The most efficient implementation I can see is this: store the mails from the IMAP resource directly in the maildir, in your case; and move the mails from the source to the destination folder within the database, in my case. Yet I also saw *huge* write activity to the MySQL database.

Why cache? And why cache to this extent? Perhaps the IMAP server can deliver the mails faster than the maildir resource can accept them. But even then, Akonadi could simply download the mails as fast as the maildir resource accepts them; downloading them faster gives no benefit if the maildir resource can't store them that fast, and I highly doubt that storing them in file_db_data or the MySQL database is any faster anyway.

I sure hope that Akonadi Next will use a different approach if it leaves proof of concept state.
Comment 7 Martin Steigerwald 2015-03-12 18:15:13 UTC
Well, filesystems with a bigger journal *can* work faster, for different reasons. I am not at all sure whether the write caching gives any performance benefit. And yes, if it goes through the database it will *duplicate* the amount of writes. And unless Akonadi moves the files directly from file_db_data to the final location, it duplicates the number of files in this case as well.
Comment 8 Rigo Wenning 2015-03-12 20:45:33 UTC
Absolutely. I have a 4-core machine with 8 GB RAM and an SSD, and the slowest thing on it is Akonadi. It now takes 15 seconds to load my calendar since I moved from one ics file to an ics folder for reasons of resilience.

The initial idea, from Nepomuk, comes from linked data, because you can't store metadata in the filesystem. And this metadata (relations to other resources, flags, tags, semantics) is very important, and it is totally cool when it works. But storing everything in the database made Akonadi so fragile that not even I trust it with any valuable metadata, because it has too often proven a waste of time when the thing crashed and the database was corrupted. So the initial idea was metadata plus a URI that describes the resource it talks about. Unfortunately, it was distorted into some kind of caching that does many things, but not this most useful one.

BTW, after more than 24 hours, still no files have been written into the local maildir except for that one email. I wonder if Akonadi will ever write them to disk.
Comment 9 Daniel Vrátil 2016-03-22 20:04:59 UTC
This could've been related to bug 339181. Can you test with Applications 16.04 once it's out?
Comment 10 Justin Zobel 2022-10-19 22:10:52 UTC
Thank you for reporting this bug in KDE software. As it has been a while since this issue was reported, can we please ask you to see if you can reproduce the issue with a recent software version?

If you can reproduce the issue, please change the status to "CONFIRMED" when replying. Thank you!
Comment 11 Rigo Wenning 2022-10-20 07:50:07 UTC
You can close this bug. After having lost so much information with the combination of Akonadi and kmail2, I've given up after more than 20 years of using KDEPIM. The frontend of the application is still the most usable you can find, but the backend implementation of the protocol stack and the caching itself are not ready for use if one has more than a very small private email stack. I would even go so far as to say that the KMail from KDE3 is still better.
Comment 12 Knut Hildebrandt 2022-10-21 10:53:30 UTC
Well, this seems to be a duplicate of Bug 374925.