Version: 4.6 (using KDE 4.6.0)
OS: Linux

I keep getting really high memory usage (which may be the cause of my kernel panics) from the nepomukservicestub service:

  6116m 5.3g 6392 R 1.9 68.8 4:36.02 /usr/bin/nepomukservicestub nepomukstrigiservice

Eventually it uses over 8 GB of RAM. I'm using F15 rawhide, so that might have something to do with the problem, but I wanted to report this in any case.

Reproducible: Sometimes

Steps to Reproduce: Just working away. It happens on its own.
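For anyone who wants to watch the growth over time, here is a rough sketch using standard procps tools (not part of the original report) that samples the resident and virtual size of the stub process once a minute:

  # print PID, RSS and VSZ (in KB) of the nepomukservicestub process(es) every 60 seconds
  while true; do
      date
      ps -o pid=,rss=,vsz=,cmd= -p "$(pidof nepomukservicestub)"
      sleep 60
  done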
Same here on a fairly up-to-date Gentoo system with kde-4.6.1. Strigi is currently performing its initial indexing; at the moment the process is at about 3 GB, with a constantly growing core size. I have actually never gotten past the initial indexing step. Granted, the number of files in my home directory tree is very large, but it still doesn't seem right that the indexer needs that much memory. Or is there some reason it needs to keep the entire database in memory? Otherwise this looks like a memory leak.

Reproducible: At least always during the initial indexing step.

Steps to Reproduce: Nothing special, just start the indexer and wait a couple of hours.
OK, the indexer process now has a core size of about 13 GB and is still growing fairly quickly ... so maybe it should be called Egacs: Eight Gigabytes And Constantly Swapping (sorry, just an homage to Emacs; that aside was somewhat off-topic. I like Emacs, by the way). Back to the topic: I think this strongly suggests that there is a memory leak in the indexer, which renders the whole indexing feature pretty useless for large directory trees (I know I can restrict indexing to certain interesting directories, but that is not the point). I'll have a look at my compiler options and see whether perhaps I was trying to over-optimize.
I may be seeing this as well: F14 with kdebase-runtime-4.6.1-1.fc14.i686. I get crashes of the strigiservice with:

  [/usr/bin/nepomukservicestub] terminate called after throwing an instance of '
  [/usr/bin/nepomukservicestub] std::bad_alloc

which would indicate an out-of-memory condition. This may also be a dupe of bug 262625.
Hmm, scratch that dupe suggestion.
Setting to "NEW" as I and others can confirm it. I think this started with KDE 4.6. I didn't see this excessive memory usage with KDE 4.5. Sometimes I have running two KDE 4 sessions on my ThinkPad T42 with Pentium M 1.8 GHz and 2 GB of RAM. With KDE 4.5 this worked, although memory consumption still seemed to be higher. With KDE 4.6 it crawls the machine to almost a halt with swap size of 1 GB. These two atop snippets just mark the beginning of it: ATOP - shambhala 2011/05/12 09:30:00 10 seconds elapsed PRC | sys 1.52s | user 7.17s | #proc 258 | #zombie 0 | #exit ? | CPU | sys 14% | user 72% | irq 1% | idle 0% | wait 13% | CPL | avg1 5.55 | avg5 6.70 | avg15 5.87 | csw 57320 | intr 11062 | MEM | tot 2.0G | free 22.7M | cache 418.0M | buff 157.5M | slab 86.3M | SWP | tot 3.8G | free 3.3G | | vmcom 3.3G | vmlim 4.8G | PAG | scan 309 | stall 0 | | swin 28 | swout 0 | DSK | sda | busy 36% | read 408 | write 276 | avio 5 ms | NET | transport | tcpi 2 | tcpo 2 | udpi 0 | udpo 0 | NET | network | ipi 74 | ipo 2 | ipfrw 0 | deliv 54 | NET | eth0 0% | pcki 436 | pcko 2 | si 24 Kbps | so 1 Kbps | PID MINFLT MAJFLT VSTEXT VSIZE RSIZE VGROW RGROW MEM CMD 1/2 3334 96 2 0K 316.5M 92900K 0K 380K 4% nepomukservice 3342 0 1 0K 107.0M 60300K 0K 4K 3% virtuoso-t 3528 0 0 0K 292.7M 49872K 0K 0K 2% kmail 3490 0 0 0K 234.0M 45648K 0K 0K 2% kontact 20353 255 92 14K 185.9M 35788K 0K 1384K 2% kmail 3596 264 0 0K 128.7M 23524K -752K 0K 1% nepomukservice 19792 0 0 129K 154.1M 11904K 0K 0K 1% knotify4 19762 0 0 41K 175.0M 11076K 0K 0K 1% kded4 19978 0 0 46K 140.8M 10300K 0K 0K 0% okular 3597 0 0 0K 134.7M 4072K 0K 0K 0% nepomukservice 3593 0 0 0K 91592K 3920K 0K 0K 0% nepomukservice 3546 0 0 0K 98.9M 3708K 0K 0K 0% rsibreak ATOP - shambhala 2011/05/12 09:30:30 10 seconds elapsed PRC | sys 0.92s | user 6.19s | #proc 258 | #zombie 0 | #exit ? | CPU | sys 8% | user 62% | irq 0% | idle 0% | wait 29% | CPL | avg1 5.23 | avg5 6.53 | avg15 5.84 | csw 26763 | intr 9585 | MEM | tot 2.0G | free 45.2M | cache 421.4M | buff 127.2M | slab 85.9M | SWP | tot 3.8G | free 3.3G | | vmcom 3.3G | vmlim 4.8G | PAG | scan 2935 | stall 0 | | swin 503 | swout 0 | DSK | sda | busy 37% | read 428 | write 62 | avio 7 ms | NET | transport | tcpi 2 | tcpo 2 | udpi 0 | udpo 0 | NET | network | ipi 63 | ipo 2 | ipfrw 0 | deliv 63 | NET | eth0 0% | pcki 433 | pcko 2 | si 23 Kbps | so 1 Kbps | PID MINFLT MAJFLT VSTEXT VSIZE RSIZE VGROW RGROW MEM CMD 1/2 3334 44 3 0K 317.5M 93636K 0K 144K 5% nepomukservice 19873 0 1 8871K 98744K 83092K 0K 4K 4% virtuoso-t 3342 2330 85 0K 107.0M 63944K 0K 1936K 3% virtuoso-t 3528 0 0 0K 292.7M 49872K 0K 0K 2% kmail 3490 0 0 0K 234.0M 45648K 0K 0K 2% kontact 3596 278 0 0K 129.0M 23984K 0K 0K 1% nepomukservice This is then getting to a point where the Linux kernel has to take massive effort to find free pages and swaps in and out like that. I now even had it, that the mouse pointer was not movable for an extended period of time. I think current Nepomuk/Strigi has a memory leak somewhere. Since I need my laptop for production work, I have disabled desktop search for now.
I just installed a brand-new Sandy Bridge system with 16 GB of RAM and kde-4.6.3, and the bug is still there. The kernel started killing tasks (namely nepomuk) after they reached a critical size and the system was running out of virtual memory (note: with 16 GB of RAM and 32 GB of swap). So this really does not seem to be a problem of insufficient resources ;).
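If anyone wants to confirm that it really is the kernel's OOM killer taking the process down (rather than an ordinary crash), the kernel log normally records it. A quick check, assuming a standard Linux setup:

  # look for OOM-killer activity in the kernel ring buffer
  dmesg | grep -i -E 'out of memory|oom|killed process'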
I have the same problem on Gentoo with KDE 4.6.80. Nepomuk and virtuoso eat all my memory (circa 4 GB). After killing both of them, memory usage drops to about 1 GB and my system responds quickly again (a minimal way to do that from a shell is sketched below).
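For reference, a minimal sketch of reclaiming the memory by hand; this simply terminates the processes, and they will be started again the next time Nepomuk runs:

  # stop the indexer stub(s) and the Virtuoso backend; use SIGKILL only if SIGTERM doesn't work
  kill $(pidof nepomukservicestub) $(pidof virtuoso-t)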
The earlier memory problem and the one in beta1 are very different. The old one should have been fixed by the new separate-process architecture. We're trying to track down this new memory leak. (It has something to do with DBus.)
Is there a separate bug open? Could I help somehow? I just tried to start nepomukservicestub under valgrind, but it didn't find anything significant.
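In case it helps anyone else repeat the valgrind run, here is a sketch of one way to do it. The service command line is taken from the top output in the original report, and the valgrind options are just the usual leak-checking ones; note that the stub is normally launched by nepomukserver, so the already-running instance may have to be stopped first:

  valgrind --leak-check=full --show-reachable=yes --log-file=nepomukstrigi-valgrind.log \
      /usr/bin/nepomukservicestub nepomukstrigiservice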
Fixed this. You can check it out in RC1.
*** Bug 272124 has been marked as a duplicate of this bug. ***