Bug 166379 - kio_nfs memory leak with >210 files in directory
Status: RESOLVED FIXED
Alias: None
Product: kio
Classification: Unmaintained
Component: nfs
Version: unspecified
Platform: Compiled Sources Linux
Importance: NOR normal
Target Milestone: ---
Assignee: Alexander Neundorf
Duplicates: 184998
 
Reported: 2008-07-12 17:15 UTC by Peter Faasse
Modified: 2010-12-12 21:23 UTC
CC List: 2 users



Description Peter Faasse 2008-07-12 17:15:26 UTC
Version:            (using KDE 3.5.9)
Installed from:    Compiled From Sources
Compiler:          gcc-4.2.3 
OS:                Linux

When browsing nfs:/ directories: if I browse a directory that contains more than 210 files (200 works fine, 210 triggers the problem), kio_nfs starts eating all memory.

NB: This bug was already reported, but marked 'resolved' because the reporter could no longer reproduce the problem
(ref: bug 61047: http://bugs.kde.org/show_bug.cgi?id=61047). I reported my initial findings as remarks on bug #61047.

I found the bug still 'alive' in kde-3.5.9, both on my 'compiled-from-sources' kde-3.5.9 machine and on Mandriva 2008.1.
Comment 1 Peter Faasse 2008-07-13 08:37:01 UTC
After some debugging: 

What seems to happen is the following (apologies for the clumsy 'pseudocode'). In kio_nfs.cpp, around line 564:

564: do {
567:   get_first_batch_of_filenames()
570:   process_the_result()
575: } while (not all files processed)

If the first batch (my last count was 202 filenames) is 'all there is', the loop finishes; if there 'is more', the loop runs until we're out of memory. I observed in gdb that 'process_the_result()' processes the '.', '..' and first filename entries a few times over.
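
For reference, here is a minimal C++ sketch of that loop shape, reconstructed from the standard NFSv2 SunRPC API (clnt_call, readdirargs, readdirres); the client handle m_client and the timeout value total_timeout are assumed names for illustration, not the actual kio_nfs code:

  readdirargs listargs;                      /* READDIR arguments          */
  readdirres  listres;                       /* decoded READDIR reply      */
  memset(&listargs, 0, sizeof(listargs));    /* cookie 0 = directory start */

  do {
      memset(&listres, 0, sizeof(listres));
      clnt_call(m_client, NFSPROC_READDIR,
                (xdrproc_t) xdr_readdirargs, (char*) &listargs,
                (xdrproc_t) xdr_readdirres,  (char*) &listres,
                total_timeout);

      for (entry* dirEntry = listres.readdirres_u.reply.entries;
           dirEntry != 0; dirEntry = dirEntry->nextentry) {
          /* process dirEntry->name ...
             Without copying dirEntry->cookie into listargs.cookie here,
             the next clnt_call requests the first batch again. */
      }
  } while (!listres.readdirres_u.reply.eof);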
Comment 2 Peter Faasse 2008-07-13 10:25:46 UTC
I'm not sure whether the following qualifies as a 'definite fix', but I've tested the following change to kio_nfs.cpp, and it seems to work:

At lines 572..574 of kio_nfs.cpp, I've added the memcpy statement as shown below:

 if ((QString(".")!=dirEntry->name) && (QString("..")!=dirEntry->name))
       filesToList.append(dirEntry->name);
 // Update the cookie for every entry (including "." and "..", hence
 // outside the if), so the next READDIR call resumes after the last
 // entry processed instead of restarting at the beginning.
 memcpy(listargs.cookie, &dirEntry->cookie, sizeof(nfscookie));

The list of files seems complete, and konqueror no longer 'hangs' on the same directory of files I tested with before.

The root of the problem seems to be that the 'cookie' member of the listargs parameter to clnt_call is never updated. As a result, NFSProtocol::listDir loops endlessly, requesting only the first set of directory entries over and over again. This only happens when the first clnt_call is not enough to produce the 'eof' condition that would exit the outer loop.
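
For context, these are the NFSv2 data structures involved, as generated by rpcgen from the protocol definition nfs_prot.x (shown for reference; the field names match the memcpy above):

  typedef char nfscookie[NFS_COOKIESIZE]; /* opaque resume position (4 bytes)  */

  struct entry {                /* one entry in the READDIR reply list         */
      u_int     fileid;
      filename  name;
      nfscookie cookie;         /* where to resume *after* this entry          */
      entry    *nextentry;
  };

  struct readdirargs {          /* arguments of the next READDIR call          */
      nfs_fh    dir;
      nfscookie cookie;         /* all-zero first; last entry's cookie later   */
      u_int     count;
  };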
Comment 3 Peter Faasse 2008-08-11 10:40:34 UTC
Am I wasting my time here? No offense intended :-) I have found two other little issues with kio_nfs by now, but if nobody is interested, I might as well forget about them.
Comment 4 Alexander Neundorf 2008-08-14 22:09:44 UTC
Well, I'm not really maintaining it anymore.
Reporting the problems here is good nevertheless.
If you are interested, you can become the new maintainer :-)
I'd be happy about that.

Alex

Comment 5 Peter Faasse 2008-08-15 11:39:43 UTC
Ok, I'll have another 'dip' into the source to see if I can find out what's going on with the two other issues (reporting from memory, not from my written notes... ;-) ):

- The first '/' of the nfs:// URL must be entered as %25%2f, or else konqueror reports something like 'path not found'.
Example:

nfs://one/two/three -> gives an error
nfs://one%25%2ftwo/three -> is OK

- A copy/paste to an nfs:// directory (in konqueror) does not work: it 'stalls', times out, and ends with an empty file on the target.

So, now both of the other issues I've seen are at least mentioned somewhere :-0.

W.r.t. maintaining the nfs 'slave':

- Is this slave a 'near dodo'? I'm also building kde4 packages, and I've not yet seen anything like nfs:// in kde4. I could start spending some time on debugging/maintaining this kio slave, but if kde-3.5.9/kde-3.5.10 is going to be the last release to actually have it on board, I hope you'll agree there is little point in spending too much time on it.

- I've been using konqueror as a test program, and for debugging the 210-files issue that was enough to get some idea of what was going on: the error was such that I could attach gdb to the slave process and trace until I got some notion of what was happening. For the other two issues, that may not be the most opportune approach. Is there something like a 'test jig' for kio slaves? Some app that I could use to try out the functions a kio slave provides in a more systematic way?
Comment 6 Peter Faasse 2008-08-27 20:01:45 UTC
A minor correction/clarification of the previously reported URL for browsing nfs with konqueror: when I export the directory /home/packages on my computer (computername = dr108), then to browse the nfs mount I end up using the URL:

nfs://dr108.local:2049/home%252fpackages

I used avahi/zeroconf/kdnssd-avahi to 'get there':

+ Start browsing with the URL zeroconf:/ in konqueror.
+ Browse to 'nfs'.
+ Browse to dr108.
+ Browse to /home/packages.

In the 'Location' line of konqueror, the %252f URL appears, and I do get the correct list of subdirectories of /home/packages in the konqueror window.

If I change the URL to:

nfs://dr108.local:2049/home/packages

I get the error message:

An error occurred while loading nfs://dr108.local:2049/home/packages:
The file or folder /home/packages does not exist.
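
The double escaping is consistent with the URL being percent-decoded twice on its way to the slave (an assumption about where the decoding happens, but it matches the observed behaviour). A small sketch of the arithmetic, using Qt 4's QByteArray::fromPercentEncoding purely for illustration:

  #include <QtCore/QByteArray>
  #include <QtCore/QDebug>

  int main()
  {
      /* what the user types in the Location bar */
      QByteArray typed("home%252fpackages");

      /* first decode (URL machinery): %25 -> '%' */
      QByteArray once = QByteArray::fromPercentEncoding(typed);
      qDebug() << once;   /* "home%2fpackages" -- still one path component */

      /* second decode (assumed to happen before the export lookup): %2f -> '/' */
      QByteArray twice = QByteArray::fromPercentEncoding(once);
      qDebug() << twice;  /* "home/packages" -- the exported directory */
      return 0;
  }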
Comment 7 Pino Toscano 2009-02-25 12:21:26 UTC
*** Bug 184998 has been marked as a duplicate of this bug. ***
Comment 8 Michael Zanetti 2010-12-12 21:23:56 UTC
fixed in rev 1205850:

make use of the cookie of the last entry when listing directories with multiple READDIR calls

This fixes an infinite READDIR loop when browsing directories containing too many files for one call.
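
A sketch of what that commit message describes, in terms of the NFSv2 structures from the earlier comments (the assumed shape of the change, not the literal diff):

  entry* lastEntry = 0;
  for (entry* dirEntry = listres.readdirres_u.reply.entries;
       dirEntry != 0; dirEntry = dirEntry->nextentry) {
      /* ... process dirEntry->name ... */
      lastEntry = dirEntry;
  }
  if (lastEntry)  /* resume the next READDIR after the last entry seen */
      memcpy(listargs.cookie, lastEntry->cookie, sizeof(nfscookie));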