Bug 321944 - synchronizing a large (about 77000 mail) Linux kernel mailing list folder blocks KMail for more than 15 minutes
Status: RESOLVED FIXED
Alias: None
Product: Akonadi
Classification: Frameworks and Libraries
Component: Maildir Resource
Version: 1.9.2
Platform: Debian unstable Linux
Priority: NOR  Severity: normal
Target Milestone: ---
Assignee: Sergio Martins
Reported: 2013-07-04 09:31 UTC by Martin Steigerwald
Modified: 2013-11-23 14:58 UTC
Description Martin Steigerwald 2013-07-04 09:31:28 UTC
Due to bugs 319226 and 320041 I still filter my mail manually: I let it flow into a special inbox folder, weed out spam by hand, and then press Ctrl-A and Ctrl-J to trigger filtering.

This worked quite well. But now my Linux/kernel-ml folder has about 77000 mails, and synchronizing it takes *ages*. During this synchronization KMail is almost unusable: it takes minutes to display or delete a mail.

Reproducible: Always

Steps to Reproduce:
1. Have a high-performance system, e.g. a Sandy Bridge i5-2520M at 2.5 GHz with an Intel SSD 320 or similar
2. Have a maildir setup with a kernel mailing list folder of about 77000 mails
3. Have it download and filter mail after a day of non-use.
Actual Results:  
merkaba:~> atopsar -O -b 0:00 -e 1:00 

merkaba  3.10.0-tp520  #18 SMP PREEMPT Tue Jul 2 09:41:49 CEST 2013  x86_64  2013/07/04

-------------------------- analysis date: 2013/07/04 --------------------------

00:00:02    pid command  cpu% |   pid command  cpu% |   pid command  cpu%_top3_
00:10:02   7246 akonadi_   4% |  7191 mysqld     4% | 15622 kmail      2%
00:20:02  15622 kmail      2% |  1929 Xorg       2% |  2464 kwin       1%
00:30:02   7234 akonadi_   9% |  7191 mysqld     9% | 15622 kmail      3%
00:36:21   7234 akonadi_  94% |  7191 mysqld    29% |  7189 akonadis  19%

hibernation cycle

merkaba:~> atopsar -O -b 10:30 -e 11:30

merkaba  3.10.0-tp520  #18 SMP PREEMPT Tue Jul 2 09:41:49 CEST 2013  x86_64  2013/07/04

-------------------------- analysis date: 2013/07/04 --------------------------

10:51:12    pid command  cpu% |   pid command  cpu% |   pid command  cpu%_top3_
11:01:12   7234 akonadi_  94% |  7191 mysqld    25% |  7189 akonadis  16%
11:11:12   7234 akonadi_  95% |  7191 mysqld    34% |  7189 akonadis  16%


atopsar reports *averages* per interval. So at 11:11:12 the Akonadi maildir resource agent had used 94% CPU on average for the previous 10 minutes, likewise for the 10 minutes before that, and for about 6 minutes (00:30:02 to 00:36:21) before hibernating.

That adds up to roughly 26 minutes (6 + 10 + 10) of saturating one core just to synchronize 77000 mails.

26 minutes during which KMail is almost completely unusable: filtering nearly hangs, accessing a mail takes minutes, and so on.



Expected Results:  
1. KMail remains snappy whatever Akonadi is doing in the background.

2. The CPU usage should match the task at hand. If our Zimbra server at work took 26 minutes to synchronize a mail folder of 77000 mails, it would not get much else done. I have a kernel-ml folder there of more than 330000 mails, and there are up to 45 other users on the *same* server, not just me. But anyway, it follows point one closely, so I do not even care too much about how much time it spends on background tasks. And when I access that folder, the contents appear up to date right away, so I think it is *way faster*.

I have set the local mail folders maildir resource to synchronize only at KMail start, not additionally at every POP3 download (because that is unusably slow). Since I stopped and restarted Akonadi yesterday in order to free some memory for playing PlaneShift, this may have been that initial maildir sync. I think I will disable this setting as well as a work-around. Since only Akonadi accesses those maildir folders, I do not see why it needs to synchronize any folders at all: if Akonadi moves a mail, I expect it to know what it has done.


I optimized MySQL a bit:

# memory buffer InnoDB uses to cache data and indexes of its tables (default: 128M)
# Larger values mean less I/O
# HINT: Raised from 80 MiB to a huge 500 MiB to see whether it makes a difference, 2.5.2013
# HINT: Lowered from 500 MiB to 200 MiB, the value I had originally intended to try, 3.5.2013
innodb_buffer_pool_size=200M

It had been at a ridiculously low 64 MB. But since it is not MySQL or I/O load that is spiking here, I think the current value works quite well.


Database size is:

martin@merkaba:~/.local/share/akonadi/db_data/akonadi> ls -lh
total 1.2G
-rw-rw---- 1 martin martin 8.5K May  1 16:07 collectionattributetable.frm
-rw-rw---- 1 martin martin 128K May 20 13:07 collectionattributetable.ibd
-rw-rw---- 1 martin martin 8.5K May  1 16:07 collectionmimetyperelation.frm
-rw-rw---- 1 martin martin 144K Jul  1 22:47 collectionmimetyperelation.ibd
-rw-rw---- 1 martin martin 8.5K May  1 16:07 collectionpimitemrelation.frm
-rw-rw---- 1 martin martin 112K May  1 16:07 collectionpimitemrelation.ibd
-rw-rw---- 1 martin martin  42K May  1 16:07 collectiontable.frm
-rw-rw---- 1 martin martin 160K Jul  4 11:29 collectiontable.ibd
-rw-rw---- 1 martin martin   61 May  1 16:07 db.opt
-rw-rw---- 1 martin martin 8.4K May  1 16:07 flagtable.frm
-rw-rw---- 1 martin martin 112K May 20 11:55 flagtable.ibd
-rw-rw---- 1 martin martin 8.4K May  1 16:07 mimetypetable.frm
-rw-rw---- 1 martin martin 112K May  1 16:07 mimetypetable.ibd
-rw-rw---- 1 martin martin 8.6K May  1 16:07 parttable.frm
-rw-rw---- 1 martin martin 1.1G Jul  4 11:29 parttable.ibd
-rw-rw---- 1 martin martin 8.5K May  1 16:07 pimitemflagrelation.frm
-rw-rw---- 1 martin martin  20M Jul  4 11:10 pimitemflagrelation.ibd
-rw-rw---- 1 martin martin 8.7K May  1 16:07 pimitemtable.frm
-rw-rw---- 1 martin martin  92M Jul  4 11:29 pimitemtable.ibd
-rw-rw---- 1 martin martin 8.5K May  1 16:07 resourcetable.frm
-rw-rw---- 1 martin martin 112K May 20 11:27 resourcetable.ibd
-rw-rw---- 1 martin martin 8.4K May  1 16:07 schemaversiontable.frm
-rw-rw---- 1 martin martin  96K May  1 16:07 schemaversiontable.ibd

martin@merkaba:~/.local/share/akonadi/db_data> ls -lh ib*
-rw-rw---- 1 martin martin 114M Jul  4 11:29 ibdata1
-rw-rw---- 1 martin martin  64M Jul  4 11:29 ib_logfile0
-rw-rw---- 1 martin martin  64M Jul  3 15:03 ib_logfile1
Comment 1 Martin Steigerwald 2013-07-04 09:41:56 UTC
martin@merkaba:~> ps aux | grep 7234 | grep -v grep
martin    7234  1.5  1.9 466260 152452 ?       Sl   Jul03  32:58 /usr/bin/akonadi_agent_launcher akonadi_maildir_resource akonadi_maildir_resource_0

I think I will try archiving mails to mboxes with the mixed maildir resource, just as I did with KMail 1. I wonder what Akonadi will do if I throw the existing archive of more than a million mails at it.
Comment 2 Sergio Martins 2013-07-04 09:58:47 UTC
I would like to test this.

Is there an easy way for me to get such a big folder?
Comment 3 Martin Steigerwald 2013-07-04 10:23:36 UTC
Well, subscribe to the Linux kernel mailing list. Anyway, I am currently tar --xz -cf'ing the folder and will send you a mail with the location. It shouldn't contain any private stuff, but I still feel more comfortable sharing it with you directly, so I will upload it somewhere onto my server.

If that's not enough, I could try the same with my complete Linux and/or Debian folders, which contain huge amounts of mail. :)

Thanks,
Martin
Comment 4 Martin Steigerwald 2013-07-04 10:54:23 UTC
Sergio, thank you for your interest in looking into this.

I sent you a mail with a link to the upload. Note: the Akonadi data and the maildir are on BTRFS on my system, but I am fairly sure that does not matter here, as the workload is clearly not I/O bound:

merkaba:~> atopsar -d -b 0:00 -e 1:00 

merkaba  3.10.0-tp520  #18 SMP PREEMPT Tue Jul 2 09:41:49 CEST 2013  x86_64  2013/07/04

-------------------------- analysis date: 2013/07/04 --------------------------

00:00:02  disk           busy read/s KB/read  writ/s KB/writ avque avserv _dsk_
00:10:02  sdb              0%    0.0     0.0     0.0     0.0   0.0   0.00 ms
          sda              2%   66.8    34.3    17.6    16.1  12.0   0.26 ms
00:20:02  sda              0%    0.1     4.0     9.5     8.2  18.9   0.07 ms
00:30:02  sda              1%   27.7    22.2    16.8    18.4   8.7   0.28 ms
00:36:21  sda              1%   32.0    12.8    24.6    20.3  12.4   0.18 ms

merkaba:~> atopsar -d -b 10:30 -e 11:30

merkaba  3.10.0-tp520  #18 SMP PREEMPT Tue Jul 2 09:41:49 CEST 2013  x86_64  2013/07/04

-------------------------- analysis date: 2013/07/04 --------------------------

10:51:12  disk           busy read/s KB/read  writ/s KB/writ avque avserv _dsk_
11:01:12  sdb              0%    0.0     0.0     0.0     0.0   0.0   0.00 ms
          sda              6%  225.2    35.9    28.4    27.1  23.3   0.25 ms
11:11:12  sda              2%   70.5    27.3    27.9    21.2  23.2   0.19 ms
11:21:12  sda              7%  217.4    22.9    24.8    31.7  11.0   0.29 ms

sda is the primary Intel SSD 320 where the data resides. sdb is a new Intel mSATA SSD that I do not use yet.
Comment 5 Sergio Martins 2013-07-04 11:45:47 UTC
Just killed it; it started to use 2 GB of memory.

How's memory consumption for you?
Comment 6 Martin Steigerwald 2013-07-04 13:58:31 UTC
Hmmm, I didn't look closely. I will have a look at the atop data from today.

2013/07/04  00:30:02
  PID    TID  MINFLT  MAJFLT  VSTEXT  VSLIBS   VDATA  VSTACK   VSIZE   RSIZE   VGROW   RGROW  SWAPSZ  RUID      EUID       MEM   CMD       1/15
7234      -    1729     299     76K  67972K  222.2M    136K  455.3M  142.9M      0K   8164K  15212K  martin    martin      2%   akonadi_agent_

15622      -   14418      11     16K  128.0M    1.5G    220K    2.1G  127.8M  87152K    -76K   4400K  martin    martin      2%   kmail


Resident Set Size 142 MB for the agent and 127 MB for kmail.

2013/07/04 10:51:12 and from there on in 10-minute steps:

  PID    TID  MINFLT  MAJFLT  VSTEXT  VSLIBS   VDATA  VSTACK   VSIZE   RSIZE   VGROW   RGROW  SWAPSZ  RUID      EUID       MEM   CMD       1/15
 7234      -  260080    3172     76K  67972K  222.2M    136K  455.3M  102.8M  455.3M  102.8M  56260K  martin    martin      1%   akonadi_agent_
 7234      -   93235     363     76K  67972K  222.2M    136K  455.3M  146.2M      0K  44496K  11860K  martin    martin      2%   akonadi_agent_
 7234      -   74297      14     76K  67972K  222.3M    136K  455.5M  147.9M    128K   1760K  10732K  martin    martin      2%   akonadi_agent_
 7234      -   14661      19     76K  67972K  222.2M    136K  455.3M  148.9M   -128K    952K   9652K  martin    martin      2%   akonadi_agent_

Again about 148 MB RSIZE.

I put the atop binary log aside for further analysis if needed.
Comment 7 Martin Steigerwald 2013-07-04 13:59:32 UTC
I think the first RGROW is large due to waking up from hibernation. Maybe hibernating caused the Akonadi maildir agent to be swapped out.
Comment 8 Martin Steigerwald 2013-07-06 14:27:31 UTC
Sergio, one idea regarding memory usage: I had mail indexing disabled while this happened, so maybe adding the folder with mail indexing disabled gives lower memory usage? Thanks, Martin
Comment 9 András Manţia 2013-11-17 11:17:18 UTC
Sergio, if you could test http://pastebin.com/5CamSj5b, that would be great.
Comment 10 Sergio Martins 2013-11-17 15:50:43 UTC
(In reply to comment #9)
> Sergio, if you could test http://pastebin.com/5CamSj5b, that would be great.

This paste has been removed
Comment 11 András Manţia 2013-11-18 11:34:00 UTC
Patch posted as http://git.reviewboard.kde.org/r/113918/.
Comment 12 Martin Steigerwald 2013-11-18 13:44:05 UTC
András and Sergio, thanks for looking into this. In case you want to try with an even larger data set: I now have 150000+ mails in that folder, which I could pack up and put onto my server. Actually, with KDEPIM SC 4.11.3 and Akonadi 1.10.2 it works quite well here. The scalability issues are now more in the message list view of KMail, especially the thread sorting.

Right now, while accessing the folder, KMail seems to stay responsive otherwise. Let's see. I think you may actually have made some improvements there already.
Comment 13 András Manţia 2013-11-18 16:18:36 UTC
Git commit 6100bbf86a5ddf5ae009fb6724d4d2ac53593c7f by Andras Mantia.
Committed on 18/11/2013 at 16:18.
Pushed by amantia into branch 'master'.

Use QDirIterator for listing the maildir folders. This:
- avoids some extra stats
- avoids calling into Maildir::findRealKey that caches the keys and uses a lot of memory for no real reason (in this case)

REVIEW: 113918

M  +16   -0    resources/maildir/libmaildir/maildir.cpp
M  +6    -0    resources/maildir/libmaildir/maildir.h
M  +32   -23   resources/maildir/retrieveitemsjob.cpp
M  +3    -3    resources/maildir/retrieveitemsjob.h

http://commits.kde.org/kdepim-runtime/6100bbf86a5ddf5ae009fb6724d4d2ac53593c7f
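
For illustration, a minimal sketch of the QDirIterator-based listing described above, assuming a standard maildir cur/ subdirectory; the helper name listMaildirKeys is hypothetical and not the actual API added to libmaildir:

// Minimal sketch, not the actual patch: stream maildir entries with
// QDirIterator instead of materializing a full QDir::entryList().
#include <QDir>
#include <QDirIterator>
#include <QStringList>

// Hypothetical helper: collect the keys (file names) of all mails in a
// maildir's cur/ subdirectory, visiting one directory entry at a time.
QStringList listMaildirKeys(const QString &maildirPath)
{
    QStringList keys;
    QDirIterator it(maildirPath + QLatin1String("/cur"), QDir::Files);
    while (it.hasNext()) {
        it.next();
        keys.append(it.fileName()); // in a maildir, the file name is the key
    }
    return keys;
}

Because QDirIterator yields entries lazily, a 77000-mail folder no longer requires building a complete listing up front or going through Maildir::findRealKey's key cache.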
Comment 14 Christoph Feck 2013-11-19 01:04:25 UTC
Will this be merged/backported into the 4.11/4.12 release branches? If not, it will only be released starting with KDE 4.13.
Comment 15 András Manţia 2013-11-19 06:41:39 UTC
I will backport it; I just didn't want to do it right away, before some more feedback and testing.
Comment 16 András Manţia 2013-11-23 14:58:42 UTC
Git commit d161e37622f8acb550d54cacd48f4886354fc1e5 by Andras Mantia.
Committed on 18/11/2013 at 16:18.
Pushed by amantia into branch 'KDE/4.12'.

Use QDirIterator for listing the maildir folders. This:
- avoids some extra stats
- avoids calling into Maildir::findRealKey that caches the keys and uses a lot of memory for no real reason (in this case)

REVIEW: 113918
(cherry picked from commit 6100bbf86a5ddf5ae009fb6724d4d2ac53593c7f)

M  +16   -0    resources/maildir/libmaildir/maildir.cpp
M  +6    -0    resources/maildir/libmaildir/maildir.h
M  +32   -23   resources/maildir/retrieveitemsjob.cpp
M  +3    -3    resources/maildir/retrieveitemsjob.h

http://commits.kde.org/kdepim-runtime/d161e37622f8acb550d54cacd48f4886354fc1e5