Bug 322199 - akonadi_imap spams perdition log when two clients are active and gets restarted all the time
Status: RESOLVED UNMAINTAINED
Product: Akonadi
Classification: Frameworks and Libraries
Component: IMAP resource
Version: 4.10
Platform: Debian unstable Linux
Importance: NOR normal
Assignee: Christian Mollekopf
Reported: 2013-07-10 15:15 UTC by Martin Steigerwald
Modified: 2018-02-01 09:49 UTC
CC List: 6 users

Description Martin Steigerwald 2013-07-10 15:15:21 UTC
Hi! I migrated KDEPIM from 4.4.11 to 4.10.4 yesterday on my workstation. Today a co-worker reported that my IMAP clients are spamming our perdition IMAP proxy logs with:

Jul 10 16:29:25 rproxy perdition.imaps[15127]: Fatal Error reading authentication information from client clientip:51242->serverip:993: Exiting child

Reproducible: Always

Steps to Reproduce:
It only seems to happen when two instances of my Akonadi setup are running, one on the laptop and one on the workstation (both with local MySQL databases).

1. Have two Akonadi servers with an IMAP resource pointed at the same perdition server
2. Monitor the perdition log (/var/log/syslog on Debian); see the sketch below
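
A minimal sketch for watching the spam on the server side, assuming the Debian /var/log/syslog path from step 2 (adjust the path and pattern to your syslog setup):

# Follow perdition entries as they arrive:
tail -f /var/log/syslog | grep --line-buffered 'perdition.imaps'

# Count the reconnect errors accumulated so far:
grep -c 'Fatal Error reading authentication information' /var/log/syslog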

Actual Results:  
I get this:

Jul 10 16:29:25 rproxy perdition.imaps[15127]: Fatal Error reading authentication information from client clientip:51242->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15129]: Connect:  clientip:51244->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15128]: Fatal Error reading authentication information from client clientip:51243->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15130]: Connect:  clientip:51245->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15129]: Fatal Error reading authentication information from client clientip:51244->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15131]: Connect:  clientip:51246->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15130]: Fatal Error reading authentication information from client clientip:51245->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15132]: Connect:  clientip:51247->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15131]: Fatal Error reading authentication information from client clientip:51246->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15133]: Connect:  clientip:51248->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15132]: Fatal Error reading authentication information from client clientip:51247->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15134]: Connect:  clientip:51249->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15133]: Fatal Error reading authentication information from client clientip:51248->serverip:993: Exiting child
Jul 10 16:29:25 rproxy perdition.imaps[15135]: Connect:  clientip:51250->serverip:993
Jul 10 16:29:25 rproxy perdition.imaps[15134]: Fatal Error reading authentication information from client clientip:51249->serverip:993: Exiting child


On the client, netstat -anpe shows that the port numbers of the akonadi_imap resource are changing rapidly. It seems to get restarted all the time.
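
A minimal sketch for observing this on the client (the grep pattern is an assumption about how the resource shows up in the netstat output, which truncates process names; adjust as needed):

# Repeatedly list the resource's TCP connections; rapidly changing
# local ports and PIDs indicate that it is being restarted.
watch -n 1 'netstat -anpe 2>/dev/null | grep akonadi_imap'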

Another indication that this is happening is:

ms@mango:~/.local/share/akonadi> cat akonadi_control.error.old
ProcessControl: Application /usr/bin/akonadi_imap_resource stopped unexpectedly ( "Process crashed" )

(I also got an akonadi_control.error file with the same content, but it was deleted again.)



Expected Results:  
Some rare occurrences of:

Jul 10 16:59:33 rproxy perdition.imaps[15174]: Auth: clientip:53966->serverip:993 client-secure=ssl authorisation_id=NONE authentication_id="my@mail.address" server="zimbraserverip:7143" protocol=IMAP4 server-secure=plaintext status="ok"

The Akonadi resource uses the same client port and keeps running for as long as the session is running.

We use perdition 1.19~rc4-2 on Debian Squeeze 6.0.7 with Zimbra Collaboration Server (ZCS) 8.0.4.

The Akonadi version on the Debian Sid/Experimental client is 1.9.2-2.


I have seen a website regarding Akonadi IMAP logging, but I have no time right now. Next week I am holding a training, so it may take a while until I can get back to this.
Comment 1 Martin Steigerwald 2013-07-10 15:24:09 UTC
See also

Bug 322200 - limit restarts of crashing resources
Comment 2 Christophe Marin 2013-07-10 17:08:47 UTC
Can you please stop changing the importance/severity of your bugs?
Comment 3 Martin Steigerwald 2013-11-10 10:19:53 UTC
For the record, I never answered this for this one (only for bug #322200): I do believe this bug is more important than normal, so I think raising the importance was justified.

I have read on the debian-kde mailing list about two users who hit the maximum number of connections from one IP on an IMAP server. I asked them to comment here:

http://lists.debian.org/debian-kde/2013/11/msg00012.html

Downgrading the severity of bugs doesn't make them any less dangerous. This almost ate our mail proxy server. No amount of denial will help here; this bug is for real. Whether the resource crashes or just reconnects endlessly doesn't matter with regard to the effect on the server. If it just reconnects, the effect may even be worse, because it can likely make more reconnects within a given amount of time than if the akonadi_imap resource were restarted each time.

I am setting this to CONFIRMED due to the reference above.
Comment 4 Rainer Dorsch 2013-11-10 12:15:05 UTC
I confirm that I see this problem from time to time; unfortunately, I have no procedure to trigger it deterministically. I also run the dovecot IMAP server myself, i.e. I can provide debug messages from both sides.

When I run into the problem again, is there any specific information I should collect that may help you better understand the issue?

Many thanks,
Rainer
Comment 5 Kevin Ottens 2013-11-16 07:27:52 UTC
The IMAP resource has a new maintainer, reassigning to him.
Comment 6 Rainer Dorsch 2013-11-24 16:44:48 UTC
I now run into the issue on every start of akonadi/kmail2.

This is what I see on the server side. Is there anything more I can look at?

Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=291/2618
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=277/2241
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=243/2361
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=32/676
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=206/2177
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=194/1862
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=193/1991
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Disconnected in IDLE bytes=59/1147
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Connection closed bytes=162/1777
Nov 24 17:37:13 bokomoko dovecot: IMAP(rd): Connection closed bytes=98/1222
Nov 24 17:37:14 bokomoko dovecot: imap-login: Login: user=<rd>, method=PLAIN, rip=80.136.255.33, lip=87.118.104.111, TLS
Nov 24 17:37:16 bokomoko last message repeated 9 times
Nov 24 17:37:16 bokomoko dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections): user=<rd>, method=PLAIN, rip=80.136.255.33, lip=87.118.104.111, TLS
Nov 24 17:37:47 bokomoko last message repeated 15 times
Nov 24 17:38:18 bokomoko last message repeated 48 times
Comment 7 Martin Steigerwald 2013-12-04 20:16:41 UTC
This may be related to bug #316840. At least the behaviour is similar.

http://git.reviewboard.kde.org/r/113947/
Comment 8 Rainer Dorsch 2013-12-07 09:55:43 UTC
I checked my server settings last week and changed the following in /etc/dovecot/dovecot.conf:

575c575
<   #mail_max_userip_connections = 10
---
>   mail_max_userip_connections = 30
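
To verify that the new value is in effect after reloading dovecot, something like the following should work (a sketch assuming dovecot 2.x, where doveconf is available; dovecot 1.x prints its effective configuration with "dovecot -a" instead):

# Print the effective setting:
doveconf mail_max_userip_connections
# Expected output after the change: mail_max_userip_connections = 30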

The problem has not reappeared since then.

Is akonadi opening more parallel connections now? Is that expected?

Rainer
Comment 9 Martin Steigerwald 2014-03-07 12:58:48 UTC
I am currently using KMail from SC 4.12.2 with Akonadi 1.11.0 via Perdition and no longer see that issue. However, the other KMail is connected directly to the IMAP server, not via Perdition. Also, Zimbra has been replaced with Exchange.

Anyway, I do not see any reconnection loops anymore, so for me the problem is gone. Rainer, maybe it is fixed for you as well?
Comment 10 Hannes H 2014-10-27 17:42:48 UTC
I can confirm the issue on KDE 4.12.5 and FreeBSD. The IMAP server is Exchange.
Comment 11 Martin Steigerwald 2014-10-28 09:01:04 UTC
Thanks, Hannes. I currently work around this by having the KMail on the workstation connect directly to the Exchange IMAP server, with only the laptop KMail going through the proxy. I didn't dare retest, as it broke part of our infrastructure last time. Do you have any chance to test this safely with 4.14.2?
Comment 12 Hannes H 2014-10-28 09:26:09 UTC
Ah, I came here from https://lists.debian.org/debian-kde/2013/11/msg00012.html, hence a little confusion. My symptoms are exactly as described there, not exactly as described here, i.e. I only have one desktop instance, which causes the problem. And I have no idea what's happening on the server end...

Those problems persist with 4.14.2; I updated yesterday.
Comment 13 Denis Kurz 2017-06-23 20:05:04 UTC
This bug has never been confirmed for a KDE PIM version that is based on KDE Frameworks (5.x). Those versions differ significantly from the old 4.x series. Therefore, I plan to close it in around two or three months. In the meantime, it is set to WAITINGFORINFO to give reporters the opportunity to check whether it is still valid. As soon as someone confirms it for a recent version (at least 5.1, ideally even more recent), I'll gladly reopen it.

Please understand that we lack the manpower to triage bugs reported for versions almost two years beyond their end of life.
Comment 14 Denis Kurz 2018-02-01 09:49:59 UTC
Just as announced in my last comment, I am closing this bug. If you encounter it again in a recent version (at least 5.1, aka 15.12; preferably much more recent), please open a new one unless it already exists. Thank you for all your input.