Version: unknown (using 4.4.00 (KDE 4.4.0) "release 222", KDE:KDE4:Factory:Desktop / openSUSE_11.2)
Compiler: gcc
OS: Linux (x86_64) release 2.6.31.12-0.1-desktop

During initial indexing of my home directory, the nepomukservicestub process running nepomukfilewatch grows to well over 2 GB of memory, rendering the system unusable due to constant swapping.
I can confirm that this service is eating a lot of memory during indexing. At this time I can't say whether it returns to normal once the indexing task is finished.
Can also confirm:

# ps -e -o args -o vsz | grep nepomuk
kdeinit4: nepomukserver [kd      315304
/usr/bin/nepomukservicestub      408108
/usr/bin/nepomukservicestub      186376
/usr/bin/nepomukservicestub      176132
/usr/bin/nepomukservicestub      690664
/usr/bin/nepomukservicestub     2763740
/usr/bin/nepomukservicestub      218376
/usr/bin/nepomukservicestub      254672
grep nepomuk                       7396
/usr/bin/akonadi_nepomuk_co      222776

See also http://forum.kde.org/viewtopic.php?f=154&t=85457&start=10
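For anyone who wants a single number instead of eyeballing the list, a small pipeline along these lines totals the VSZ column (which ps reports in KiB). This is just a sketch; the `sum_nepomuk_vsz` helper name is mine, and the sample figures are taken from the snapshot above:

```shell
# Sums the VSZ column (KiB) of nepomuk-related lines from
# `ps -e -o args= -o vsz=` output and reports the total in MiB.
sum_nepomuk_vsz() {
  awk '/nepomuk/ { total += $NF } END { printf "%.1f MiB\n", total / 1024 }'
}

# Example on two of the lines captured above
# (2763740 KiB is the runaway filewatch process):
printf '%s\n' \
  '/usr/bin/nepomukservicestub 408108' \
  '/usr/bin/nepomukservicestub 2763740' \
  | sum_nepomuk_vsz
# -> 3097.5 MiB
```

On a live system you would feed it `ps -e -o args= -o vsz=` instead of the printf.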
# uname -a
Linux linfinit 2.6.31.12-0.1-desktop #1 SMP PREEMPT 2010-01-27 08:20:11 +0100 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/SuSE-release
openSUSE 11.2 (x86_64)
VERSION = 11.2

# rpm -q -f /usr/bin/nepomukservicestub
kdebase4-runtime-4.4.1-192.1.x86_64
Why is this problem not confirmed? I can reproduce it here: every time Strigi is indexing, it eats up a lot of memory and my system has to swap.
This bug has been confirmed by many people and I can reproduce it, too. I suspect you mean that you want the bug status to change, so I am doing that.
I stumbled across some interesting interactions and behaviour while running a 'find /' under sudo after enabling compcache on every bootup. Access times seem to get updated, and the entire filesystem 'ledger' ends up stored in compressed RAM. What I noticed is reduced RAM requirements in Clementine versus Amarok, and of course in the filewatch daemon. This is probably similar in scope to prelinking on Gentoo as far as feel goes, snake-oil-like, but it may expose some workarounds in kernel behaviour and/or Nepomuk behaviour.
(In reply to comment #6)
> I actually seemed to stumble across some interesting interactions and behaviour
> in running a 'find / sudo' after enabling compcache after every bootup.
>
> Access times seem to be updated and the entire filesystem 'ledger' is stored in
> compressed ram. What I did notice is reduced ram requirements in clementine as
> versus Amarok and of course in the filewatch daemon.
>
> Probably this is similar in scope to prelinking on Gentoo as far as feel goes,
> snake-oil-like. It may expose some workarounds for kernel behaviour and / or in
> Nepomuk behavior.

The improvements show up mostly in shared memory. Amarok isn't privy to shared memory on file handles anyway, so it doesn't show much improvement there. Not being inclined to patch or dig any deeper into which calls are at issue, I used Amarok as a hardline reference and played around with Nepomuk and Clementine for a bit to assess file-handle economics. I'm still surprised how well compcache functions as a 'phreak'-like application; I haven't felt like I was hacking like that since my Atari ST days.
I can confirm this bug affects me too.

Linux tom-laptop 2.6.33-020633-generic #020633 SMP Thu Feb 25 10:10:03 UTC 2010 x86_64 GNU/Linux
Kubuntu 10.04
I can confirm the memory leak in nepomukservicestub too, often together with virtuoso-t. It started after upgrading my system to Lucid Lynx.

KDE Platform Version: 4.4.2 (KDE 4.4.2)
Qt Version: 4.6.2
Operating System: Linux 2.6.32-22-generic i686
Distribution: Ubuntu 10.04 LTS
Can confirm this bug too....
Confirmed on Kubuntu 10.04 with KDE 4.4.2
Might be useful: http://thread.gmane.org/gmane.comp.kde.devel.core/63824 is a mailing-list thread where Sebastian asks for help in diagnosing the memory leak. Sebastian, did you try massif, as suggested there by Andreas Hartmetz?
BTW, Sebastian, could it be related to the dangling metadata graphs that you wrote about (http://trueg.wordpress.com/2010/02/01/dangling-meta-data-graphs-caution-very-technical/)? On my machine, the query shows that I have 11717 such graphs:

$ nepomukcmd query 'select count(?mg) where { ?mg nrl:coreGraphMetadataFor ?g . OPTIONAL { graph ?g { ?s ?p ?o . } . } . FILTER(!BOUND(?s)) . }'
callret-0 -> "11717"^^<http://www.w3.org/2001/XMLSchema#int>
Total results: 1
Execution time: 00:00:08.58

Is that a large number?
http://trac.pcbsd.org/changeset/7147 is a BSD changeset related to CPU usage. While the usage there is all accounted for, who can say what else surfaces? Digressing a bit: does this issue present differently on BSD?
It seems the memory leak was magically fixed by some changes unrelated to the hunt for this bug. Too bad we never figured out the real cause, but it is still good to know the leak is gone.
*** Bug 231409 has been marked as a duplicate of this bug. ***
*** Bug 235377 has been marked as a duplicate of this bug. ***
Unfortunately, it seems to be back in KDE 4.7 (or, more precisely, in Kubuntu 11.10). nepomukservicestub runs my 4 GB RAM system at work out of memory; I had to disable Nepomuk completely to get the memory back.
Git commit 4f7a5d19da26af282f640c913afccad26000a388 by Sebastian Trueg.
Committed on 24/10/2011 at 17:47.
Pushed by trueg into branch 'KDE/4.7'.

Run the MetaDataMover with an event loop.

It is using the exact same approach as the file indexer does: a new thread is created and started and the MetaDataMover is then QObject::moveToThread'ed to it. This fixes mem leaks caused by DBus events that are not cleaned up.

BUG: 226676
FIXED-IN: 4.7.3

M  +70 -63  nepomuk/services/filewatch/metadatamover.cpp
M  +13 -10  nepomuk/services/filewatch/metadatamover.h
M  +9  -10  nepomuk/services/filewatch/nepomukfilewatch.cpp
M  +3  -1   nepomuk/services/filewatch/nepomukfilewatch.h

http://commits.kde.org/kde-runtime/4f7a5d19da26af282f640c913afccad26000a388
Git commit 90afde5b3a488f89be2349085971655e6497f1bc by Sebastian Trueg.
Committed on 24/10/2011 at 17:00.
Pushed by trueg into branch 'master'.

Run the MetaDataMover with an event loop.

It is using the exact same approach as the file indexer does: a new thread is created and started and the MetaDataMover is then QObject::moveToThread'ed to it. This fixes mem leaks caused by DBus events that are not cleaned up.

BUG: 226676

M  +70 -63  services/filewatch/metadatamover.cpp
M  +13 -10  services/filewatch/metadatamover.h
M  +9  -10  services/filewatch/nepomukfilewatch.cpp
M  +3  -1   services/filewatch/nepomukfilewatch.h

http://commits.kde.org/nepomuk-core/90afde5b3a488f89be2349085971655e6497f1bc
Git commit 0f511184ae25364618ba244f6afda5570b02c388 by Sebastian Trueg.
Committed on 24/10/2011 at 17:47.
Pushed by trueg into branch 'master'.

Run the MetaDataMover with an event loop.

It is using the exact same approach as the file indexer does: a new thread is created and started and the MetaDataMover is then QObject::moveToThread'ed to it. This fixes mem leaks caused by DBus events that are not cleaned up.

BUG: 226676

M  +70 -63  nepomuk/services/filewatch/metadatamover.cpp
M  +13 -10  nepomuk/services/filewatch/metadatamover.h
M  +9  -10  nepomuk/services/filewatch/nepomukfilewatch.cpp
M  +3  -1   nepomuk/services/filewatch/nepomukfilewatch.h

http://commits.kde.org/kde-runtime/0f511184ae25364618ba244f6afda5570b02c388
Looks like this is back in 4.9 (I'm currently using RC1+ from git here).
Created attachment 72428 [details]
massif output

This is the massif output after running for a few minutes.
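For readers poking at an attachment like this one: massif logs are plain text with one `mem_heap_B=<bytes>` line per snapshot, so the peak heap size can be pulled out with a one-liner. This is only a quick sketch; the `peak_heap_mib` name and the sample numbers below are made up for illustration, and the `ms_print` tool shipped with valgrind gives the full picture:

```shell
# Prints the largest mem_heap_B snapshot value from a massif.out file, in MiB.
peak_heap_mib() {
  awk -F= '/^mem_heap_B=/ { if ($2 > max) max = $2 }
           END { printf "%.1f MiB\n", max / 1048576 }' "$1"
}

# Example on a synthetic two-snapshot log:
printf 'mem_heap_B=52428800\nmem_heap_B=104857600\n' > /tmp/massif.sample
peak_heap_mib /tmp/massif.sample
# -> 100.0 MiB
```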
(In reply to comment #22)
> Looks like this is back in 4.9 (I'm currently using RC1+ from git here).

I opened bug #304476 against KDE 4.9.0.
(In reply to comment #23)
> Created attachment 72428 [details]
> massif output
>
> This is the massif output after running a few minutes

Thanks a lot. It helped in diagnosing the problem, and I think I found another way to reduce the memory footprint by about 10 MB. Could you please test the patch given here: https://bugs.kde.org/show_bug.cgi?id=304476