Bug 289932 - virtuoso_t eats up cpu
Summary: virtuoso_t eats up cpu
Status: RESOLVED FIXED
Alias: None
Product: nepomuk
Classification: Miscellaneous
Component: general
Version: 4.8
Platform: Arch Linux (Linux)
Importance: NOR normal
Target Milestone: ---
Assignee: Sebastian Trueg
URL:
Keywords:
Duplicates: 281653 291948 292517 296372
Depends on:
Blocks:
 
Reported: 2011-12-27 16:11 UTC by Roland Leißa
Modified: 2012-11-11 16:48 UTC
CC List: 48 users

See Also:
Latest Commit:
Version Fixed In: 4.8.2


Attachments
threadsanitizer log (44.59 KB, text/plain)
2012-01-26 08:52 UTC, Graham Anderson
configuration diff for described workaround (1.28 KB, patch)
2012-02-07 19:46 UTC, Johannes Huber
Patch against kdepim-runtime 4.8 (6.94 KB, patch)
2012-02-13 16:33 UTC, Sebastian Trueg
Screenshot of akonadi console (13.73 KB, image/jpeg)
2012-02-14 12:59 UTC, Wolfgang Mader
Patch against kdepim-runtime (18.60 KB, patch)
2012-02-15 13:09 UTC, Sebastian Trueg
Patch against kde-runtime (branch KDE/4.8) (5.54 KB, patch)
2012-03-08 19:02 UTC, Sebastian Trueg
3 backtraces of virtuoso_t when it consumes ~100% of CPU (17.87 KB, application/octet-stream)
2012-04-16 14:52 UTC, Jirka Klimes
New crash information added by DrKonqi (5.25 KB, text/plain)
2012-11-11 16:34 UTC, Pascal Maillard

Description Roland Leißa 2011-12-27 16:11:56 UTC
Version:           4.8 (using Devel) 
OS:                Linux

virtuoso-t eats up 50-90% of CPU time.
The only way to stop this is to completely disable the Nepomuk semantic desktop.

Reproducible: Always

Steps to Reproduce:
1. "Enable Nepomuk Semantic Desktop"


Expected Results:  
A sane CPU workload.

The same result occurs when the email indexer and the file indexer are disabled.
Comment 1 bhonermann 2011-12-31 14:04:25 UTC
Have the same problem.

Kubuntu 11.10 with KDE 4.8 RC.
Comment 2 H.H. 2012-01-10 20:15:49 UTC
I have the same problem with KDE 4.8 RC2 installed from openSUSE OBS packages.
The load does not stop; I have to disable Nepomuk in the settings and kill the virtuoso process manually to stop it.

If you need more info to fix the issue, please tell me what I need to report. It would be terrible to have this in the final 4.8 release (KDE 4.7 worked fine previously).

I don't use Strigi, although it was re-enabled after the upgrade to KDE 4.8!?
Comment 3 bhonermann 2012-01-10 20:24:38 UTC
Have seemingly sorted the problem on my install of 4.8 RC2, though it requires allowing Nepomuk to re-index everything.

1. Turn off Nepomuk and kill any remaining virtuoso-t processes

2. Delete (or move) ~/.kde/share/apps/nepomuk folder

3. Delete (or move) ~/.kde/share/config/nepomuk* configuration files

4. Restart Nepomuk.

It's still indexing now, but even while indexing, virtuoso-t is only peaking at about 10%. I did notice that a few files inside the nepomuk folder were owned by root. Not sure what may have happened in the migration, but it seemed odd that root would own anything there.
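
Roughly, the steps above correspond to the following shell sketch. This is only an illustration: it assumes the default ~/.kde prefix used in this thread, that Nepomuk has already been switched off in System Settings, and the backup directory names are just examples. Restarting Nepomuk can also be done from System Settings instead of running nepomukserver by hand.

killall virtuoso-t                                         # stop any leftover virtuoso processes
mkdir -p ~/nepomuk-backup/apps ~/nepomuk-backup/config     # example backup locations
mv ~/.kde/share/apps/nepomuk ~/nepomuk-backup/apps/
mv ~/.kde/share/config/nepomuk* ~/nepomuk-backup/config/
nepomukserver &                                            # or re-enable Nepomuk from System Settings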
Comment 4 Roland Leißa 2012-01-10 23:56:43 UTC
Removing all Nepomuk files solved this issue for me.
Comment 5 H.H. 2012-01-11 14:28:55 UTC
I did not have success with cleaning nepomuk files.

When Nepomuk is active (and virtuoso at 100% CPU) I get error messages in .xsession-errors, including some syntax errors on SPARQL queries which seem to be related to my last KMail search.

When Nepomuk is inactive, my .xsession-errors file is flooded with messages like

"/usr/bin/kactivitymanagerd(2839)" Soprano: "QLocalSocket::connectToServer: Invalid name"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "org.freedesktop.DBus.Error.ServiceUnknown - The name org.kde.nepomuk.services.nepomukstorage was not provided by any .service files"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Unsupported operation (2)": "Invalid model"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "org.freedesktop.DBus.Error.ServiceUnknown - The name org.kde.nepomuk.services.nepomukstorage was not provided by any .service files"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Unsupported operation (2)": "Invalid model"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Unsupported operation (2)": "Invalid model"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Invalid iterator."
"/usr/bin/kactivitymanagerd(2839)" Soprano: "org.freedesktop.DBus.Error.ServiceUnknown - The name org.kde.nepomuk.services.nepomukstorage was not provided by any .service files"
QObject: Cannot create children for a parent that is in a different thread.
(Parent is Soprano::Client::LocalSocketClient(0x7ab058), parent's thread is QThread(0x628440), current thread is RankingsUpdateThread(0x86ea10)
"/usr/bin/kactivitymanagerd(2839)" Soprano: "QLocalSocket::connectToServer: Invalid name"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "org.freedesktop.DBus.Error.ServiceUnknown - The name org.kde.nepomuk.services.nepomukstorage was not provided by any .service files"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Unsupported operation (2)": "Invalid model"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "org.freedesktop.DBus.Error.ServiceUnknown - The name org.kde.nepomuk.services.nepomukstorage was not provided by any .service files"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Unsupported operation (2)": "Invalid model"
"/usr/bin/kactivitymanagerd(2839)" Soprano: "Unsupported operation (2)": "Invalid model"
"/usr/bin/
Comment 6 H.H. 2012-01-23 23:13:07 UTC
Deactivating some KRunner plugins solved it for me.
Comment 7 Graham Anderson 2012-01-26 08:52:12 UTC
Created attachment 68187 [details]
threadsanitizer log

I see this CPU thrashing in the 4.8 release. I ran it under Valgrind with ThreadSanitizer and the Nepomuk feeders/indexers enabled to make sure those code paths were exercised.

There are a few candidates for race conditions; log attached.
Comment 8 Mark 2012-01-28 16:46:54 UTC
I "thought" I has the same issue. The result certainly is a constant 6% cpu usage for virtuoso. However, when suspending indexing that dropped to 0%.

I wish there was an option to just let virtuose/nepomuk/whatever use all CPU and disc for some set time to index everything fast! Instead of just slowly indexing but constantly CPU fluctuations.

Funny thing is that once i resumed indexing the "nepomukcontroller" (or something like it) crashed :p
Comment 9 Gunter Ohrner 2012-01-28 18:19:29 UTC
Mh, resetting the Nepomuk configuration as described in comment #3 does not really seem to help in my case.

(Running KDE 4.8.0 @ Kubuntu 11.10.)
Comment 10 Franz Trischberger 2012-01-28 18:59:52 UTC
Resetting helped. The same issue popped up when upgrading from KDE 4.6.4 to 4.7.0; the only solution was deleting the whole Nepomuk storage. Some earlier updates also had issues with virtuoso eating up CPU or memory.
Luckily I never made metadata an important feature in my workflow. No rating, no keywords.
Comment 11 Alex 2012-01-28 22:24:06 UTC
Resetting the Nepomuk configuration did not work for me. The only thing that works is disabling Nepomuk completely.
Comment 12 Alex 2012-01-28 22:24:43 UTC
*** This bug has been confirmed by popular vote. ***
Comment 13 Diego Viola 2012-01-29 03:59:18 UTC
I've had this issue with KDE 4.8.0 (Arch Linux) x86-64.

Disabling Strigi/Nepomuk and killing the process (virtuoso-t) worked as a workaround for me.
Comment 14 Gunter Ohrner 2012-01-29 08:41:16 UTC
I currently cannot seem to derive any reliable rules from Nepomuk's / virtuoso's behaviour.

After removing Nepomuk's data store and configuration files, and disabling file- and email-indexing, virtuoso still started to run and eat 100% CPU (niced). 

However, this time after working for a while, it seemed to finish with whatever it was doing and started to sleep.

It will wake up from time to time (pretty frequently, actually) and process its next chunk of work.

I then tried to disable Nepomuk completely, which terminated the running virtuoso-t process.

However, now dbus-daemon and akonadi-nepomuk start to consume CPU from time to time, also 100%, but not niced...

I'm a bit at a loss trying to understand what's happening there... :-)
Comment 15 Wolfgang Mader 2012-01-29 14:04:26 UTC
I have the same issue on Arch Linux x86_64 with KDE 4.8. I have a dual-core machine and both cores are used 100% by virtuoso-t processes. I never installed beta packages, I just followed the 4.7 -> 4.8 update path.

Will I lose all my metadata if I wipe the nepomuk folder clean as suggested in comment #3? If so, I would love to see another solution. At the moment, the only way to use my machine is to disable Nepomuk and friends.

If you need further information, I am happy to provide it if I can.
Comment 16 Toralf Förster 2012-01-29 15:59:31 UTC
I don't have any virtuoso-t processes running, but my .xsession-errors is spammed by kactivitymanagerd too: https://bugs.gentoo.org/show_bug.cgi?id=401155
Comment 17 Wolfgang Mader 2012-01-29 16:21:59 UTC
I can confirm Comment #16. Even now that Nepomuk is disabled, debug messages are written to .xsession-errors.
Comment 18 Ralph Moenchmeyer 2012-02-01 17:51:00 UTC
I upgraded to KDE 4.8 on an openSUSE 12.1 system (x86_64). I followed the advice of comment #3 and it worked very well for me.

One thing I did in addition was to delete all akonadi_nepomuk_* files in the ~/.kde/share/config/ folder, too.

The deleted config files and the nepomuk directory were regenerated after activating Nepomuk again.

Since then I do get re-indexing of files, mails and other things, but with a much, much lower CPU load.

Furthermore, the virtuoso-t process now seems to behave adaptively: when I pause doing something it becomes more active and causes higher CPU load, but CPU consumption drops again substantially when I actually work with an application on the KDE desktop.
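
For reference, that additional deletion amounts to a single shell command along these lines (default ~/.kde prefix assumed, Nepomuk stopped first):

rm ~/.kde/share/config/akonadi_nepomuk_*    # or mv the files to a backup directory instead of deleting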
Comment 19 Alejandro Nova 2012-02-01 18:29:30 UTC
From my experience:

1. At least all the Akonadi data NEEDS reindexing.
2. Soprano 2.7.4 breaks the Akonadi indexing (so it WILL eat your CPU forever; the solution is to use either trunk or 2.7.3). 2.7.0 (Kubuntu) doesn't work either.
3. You NEED akonadi-1.7.0 to use KDE 4.8.0. If you try to use KDE 4.8.0 with akonadi-1.6.2 (in other words, if you use Kubuntu) you'll run into trouble.
Comment 20 Wolfgang Mader 2012-02-01 19:00:18 UTC
Do comment #19 and comment #18 mean that I have to give up all the metadata, like ratings and such, that I have accumulated up to now? So, do I have to follow comment #3 on deleting the nepomuk folders?

I already left my machine sitting for two nights "reindexing" but I was not able to see any progress. As soon as I turn Nepomuk on again, all available CPUs are at 100%.
Comment 21 S. Burmeister 2012-02-01 21:43:29 UTC
(In reply to comment #19)
> From my experience:
> 
> 1. At least all the Akonadi data NEEDS reindexing.
> 2. Soprano 2.7.4 breaks the Akonadi indexing (so, it WILL eat your CPU forever,
> the solution is to use either trunk or 2.7.3) Also, 2.7.0 (Kubuntu) also
> doesn't work.
> 3. You NEED akonadi-1.7.0 to use KDE 4.8.0. If you try to use KDE 4.8.0 with
> akonadi-1.6.2 (in other words, if you use Kubuntu) you'll run into trouble.

openSUSE provides both akonadi 1.7.0 and soprano 2.7.4 in its KDE 4.8 repo, and the issue occurs nonetheless.

If it were the Akonadi data, disabling email indexing in System Settings would stop that re-indexing. AFAIK that does not solve the issue.
Comment 22 Graham Anderson 2012-02-01 22:42:11 UTC
(In reply to comment #19)
> From my experience: 
> 1. At least all the Akonadi data NEEDS reindexing.

If this were the case, then I would see the problem go away after a given length of time. Before I filed a comment and log on this bug, I gave the process the maximum allowable RAM via System Settings (1 GiB) and re-niced it to give it maximum priority (see the sketch at the end of this comment). I left it for several hours and can tell you there was no change after my 4 cores thrashed away most of the night. I spent less time indexing several hundred thousand emails imported via IMAP when I moved to PIM 4.6; it's not an indexing issue, or at least not directly caused by the existing index, as far as I can guess.

> 2. Soprano 2.7.4 breaks the Akonadi indexing (so, it WILL eat your CPU forever,
> the solution is to use either trunk or 2.7.3) Also, 2.7.0 (Kubuntu) also
> doesn't work.
> 3. You NEED akonadi-1.7.0 to use KDE 4.8.0. If you try to use KDE 4.8.0 with
> akonadi-1.6.2 (in other words, if you use Kubuntu) you'll run into trouble.

I can testify to what Sven mentioned about versions. Same platform, same packages, same versions as him.

This extremely high CPU usage when there's not much else happening is very indicative of race conditions, which is why I went to the trouble of generating a log (previously attached) using ThreadSanitizer. Now, ThreadSanitizer is not 100% accurate and it cannot map bugs to the potential race conditions it finds, but it's _very_ good. So when it lights up like a Christmas tree while the CPU thrashing occurs, I'd be prepared to wager that the issues it found are not entirely unrelated to this bug.
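
A rough sketch of the re-nicing mentioned above (it assumes the busy process is virtuoso-t; the nice value is only an example, and negative values need root):

sudo renice -n -10 -p "$(pidof virtuoso-t)"    # -10 is just an example priority boost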
Comment 23 Graham Anderson 2012-02-01 22:51:27 UTC
Oh, also, please bear in mind this could easily be something else. Even if it were found to be a data race, concurrent programming with threads is *hard*, towards the elite end of the programming spectrum... A little patience is advised ;)
Comment 24 Tulio Magno Quites Machado Filho 2012-02-04 02:08:13 UTC
Found a lot of this in my soprano-virtuoso.log:
23:51:58 SQL Error: 42001 : SR185: Undefined procedure DB.DBA.KEY_DELETE_REPLAY.
23:51:58 SQL Error: 42001 : SR185: Undefined procedure DB.DBA.KEY_DELETE_REPLAY.
23:51:58 SQL Error: 42001 : SR185: Undefined procedure DB.DBA.KEY_DELETE_REPLAY.

They started on the same day I updated to 4.8.
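
For anyone checking their own log: on a default KDE 4.x setup it usually lives under the Nepomuk repository directory (the exact path is an assumption here and may differ per distribution), e.g.:

grep KEY_DELETE_REPLAY ~/.kde/share/apps/nepomuk/repository/main/data/soprano-virtuoso.log    # path assumes the default repository layout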
Comment 25 Branislav Klocok 2012-02-06 10:50:55 UTC
Following:
1. Turn off Nepomuk and kill any remaining virtuoso-t processes
2. Delete (or move) ~/.kde/share/apps/nepomuk folder
3. Delete (or move) ~/.kde/share/config/nepomuk* configuration files
4. Restart Nepomuk.
Solved the issue for me. OS: Linux 3.1.0-1.2-desktop x86_64, System: openSUSE 12.1 (x86_64), KDE: 4.8.00 (4.8.0 "release 462").
Comment 26 Andreas Schneider 2012-02-06 11:01:18 UTC
Could someone please back up the Nepomuk files, then follow the steps from comment #25 and, if it works, attach them here?
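
For example, something along these lines keeps a copy before anything gets deleted (default ~/.kde prefix assumed; some setups use ~/.kde4 instead):

tar czf ~/nepomuk-backup.tar.gz ~/.kde/share/apps/nepomuk ~/.kde/share/config/nepomuk*    # adjust paths if your profile uses ~/.kde4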
Comment 27 Branislav Klocok 2012-02-06 14:21:20 UTC
Sorry, I have deleted the old files.
Comment 28 Tulio Magno Quites Machado Filho 2012-02-06 14:36:31 UTC
(In reply to comment #26)
> Could someone please backup the nepomuk files. Then follow the steps for
> comment #25 and if it works attach them here?

I already tested this and it works, but I lose all my tags and ratings. So, I recovered everything and disabled it again.
Andreas, which files do you need?
My nepomuk data (~/.kde4/share/apps/nepomuk) is almost 3 GiB. :-(
Comment 29 H.H. 2012-02-07 13:49:39 UTC
First constructive answer found:

http://vhanda.in/blog/2012/02/virtuoso-going-crazy-/
Comment 30 H.H. 2012-02-07 14:05:10 UTC
Hmm, the query:

qdbus org.kde.nepomuk.services.nepomukqueryservice

results in

/
/nepomukqueryservice
/servicecontrol

and nonetheless virtuoso-t uses high CPU resources (70-90%), with short pauses in between.
Comment 31 S. Burmeister 2012-02-07 14:38:38 UTC
(In reply to comment #30)
> mhm, the query:
> 
> qdbus org.kde.nepomuk.services.nepomukqueryservice
> 
> results in
> 
> /
> /nepomukqueryservice
> /servicecontrol
> 
> and nontheless virtuoso-t uses high cpu ressources (70-90%) with short pauses
> in between)

Yep, virtuoso goes crazy without queries as well. Debugging becomes a bit more difficult due to that. See:

http://kdeatopensuse.wordpress.com/2011/11/09/debugging-nepomukvirtuosos-cpu-usage/
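
Presumably that boils down to attaching gdb to the busy process (virtuoso-t, or nepomukstorage as in the comments below) and grabbing thread backtraces, roughly like this; <pid> is a placeholder for the PID reported by pidof:

$ pidof virtuoso-t
$ gdb -p <pid>
(gdb) thread apply all backtrace
(gdb) detach
(gdb) quit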
Comment 32 Johannes Huber 2012-02-07 19:46:22 UTC
Created attachment 68605 [details]
configuration diff for described workaround
Comment 33 Vishesh Handa 2012-02-08 14:12:03 UTC
*** Bug 292517 has been marked as a duplicate of this bug. ***
Comment 34 Beat Wolf 2012-02-08 14:17:18 UTC
I was asked to add my comment from a blog to this bugreport:

"it seems like i have no open queries, but virtuoso takes 13% (around 1 core on my 6 core). there are nepomukservicestub processes, on using 4% and the others 1% each.
The virtuoso gdb backtrace shows this: http://paste.kde.org/201446/
The nepomukservicestub with 4% shows this: http://paste.kde.org/201452/

I hope this helps!"
Comment 35 Denni 2012-02-09 18:25:36 UTC
I followed the comment #25 path and the problem vanished.
OS:  Linux 3.2.0-7-desktop x86_64
System:  openSUSE 12.1 (x86_64) KDE:  4.8.00 (4.8.0 "release 2")
Comment 36 jos poortvliet 2012-02-11 11:15:08 UTC
So I don't actually have any queries running. Yet virtuoso uses between 70 and 100% CPU on my dual-core laptop. Sometimes it stops using 100% CPU for a minute or two, but it quickly goes back to 100%.

According to Mr. Handa on http://vhanda.in/blog/2012/02/virtuoso-going-crazy-/ I have to use GDB to figure out what's going on.

Just to be sure, 
$ qdbus org.kde.nepomuk.services.nepomukqueryservice
returns:
/
/nepomukqueryservice
/servicecontrol

So what I did as per Vishesh' request:
(figure out nepomukstorage pid)
gdb
attach (pid)
(gdb) thread apply all backtrace

Thread 30 (Thread 0x7f93853de700 (LWP 2129)):
#0  0x00007f938cdd5d33 in select () from /lib64/libc.so.6
#1  0x00007f938eb14931 in ?? () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 29 (Thread 0x7f937ffff700 (LWP 2165)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f938eb1648f in ?? () from /usr/lib64/libQtCore.so.4
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 28 (Thread 0x7f937f5f6700 (LWP 2192)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bca598 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 27 (Thread 0x7f937edf5700 (LWP 2198)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
---Type <return> to continue, or q <return> to quit---
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 26 (Thread 0x7f937e5f4700 (LWP 2199)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 25 (Thread 0x7f937ddf3700 (LWP 2200)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 24 (Thread 0x7f937d5f2700 (LWP 2201)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
---Type <return> to continue, or q <return> to quit---
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 23 (Thread 0x7f937cdf1700 (LWP 2202)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 22 (Thread 0x7f935bfff700 (LWP 2205)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 21 (Thread 0x7f935b7fe700 (LWP 2208)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 20 (Thread 0x7f9353fff700 (LWP 2210)):
---Type <return> to continue, or q <return> to quit---
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 19 (Thread 0x7f935affd700 (LWP 2212)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 18 (Thread 0x7f935a7fc700 (LWP 2214)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 17 (Thread 0x7f93597fa700 (LWP 2225)):
#0  0x00007f938cdd5d33 in select () from /lib64/libc.so.6

#1  0x00007f93848b6fd9 in ?? () from /usr/lib64/virtodbc_r.so
#2  0x00007f93848bbd39 in ?? () from /usr/lib64/virtodbc_r.so
#3  0x00007f93848868b1 in ?? () from /usr/lib64/virtodbc_r.so
#4  0x00007f938488ab7d in ?? () from /usr/lib64/virtodbc_r.so
#5  0x00007f93853f66f7 in ?? () from /usr/lib64/libiodbc.so.2
#6  0x00007f93853f6a3d in SQLExecDirect () from /usr/lib64/libiodbc.so.2
#7  0x00007f9385654b1f in ?? () from /usr/lib64/soprano/libsoprano_virtuosobackend.so
---Type <return> to continue, or q <return> to quit---
#8  0x00007f9385654de6 in ?? () from /usr/lib64/soprano/libsoprano_virtuosobackend.so
#9  0x00007f938563ea8a in ?? () from /usr/lib64/soprano/libsoprano_virtuosobackend.so
#10 0x00007f9387dfebad in ?? () from /usr/lib64/kde4/nepomukstorage.so
#11 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement const&) () from /usr/lib64/libsoprano.so.4
#12 0x00007f9387e04b81 in ?? () from /usr/lib64/kde4/nepomukstorage.so
#13 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement const&) () from /usr/lib64/libsoprano.so.4
#14 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement const&) () from /usr/lib64/libsoprano.so.4
#15 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement const&) () from /usr/lib64/libsoprano.so.4
#16 0x00007f9387e2f02e in ?? () from /usr/lib64/kde4/nepomukstorage.so
#17 0x00007f9387e34dd8 in ?? () from /usr/lib64/kde4/nepomukstorage.so
#18 0x00007f9387e1a1b8 in ?? () from /usr/lib64/kde4/nepomukstorage.so
#19 0x00007f9387e278c2 in ?? () from /usr/lib64/kde4/nepomukstorage.so
#20 0x00007f9387e27e8c in ?? () from /usr/lib64/kde4/nepomukstorage.so
#21 0x00007f938ea2dd12 in ?? () from /usr/lib64/libQtCore.so.4
#22 0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#23 0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#24 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 16 (Thread 0x7f9358ff9700 (LWP 2227)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 15 (Thread 0x7f93537fe700 (LWP 2233)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
---Type <return> to continue, or q <return> to quit---
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 14 (Thread 0x7f9352ffd700 (LWP 2463)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 13 (Thread 0x7f9359ffb700 (LWP 3030)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 12 (Thread 0x7f9384878700 (LWP 5727)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 11 (Thread 0x7f93527fc700 (LWP 5819)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
---Type <return> to continue, or q <return> to quit---
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7f93513ed700 (LWP 5825)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f9350bec700 (LWP 5830)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f933ffff700 (LWP 6064)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f9351bee700 (LWP 6065)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
---Type <return> to continue, or q <return> to quit---
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f933f7fe700 (LWP 6107)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f933effd700 (LWP 6108)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f933e7fc700 (LWP 6109)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f933dffb700 (LWP 6110)):
#0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from /usr/lib64/libQtCore.so.4
#2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
#3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f9337fff700 (LWP 25987)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#5  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
---Type <return> to continue, or q <return> to quit---
#7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
#8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
#9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f938cddc63d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f938f078760 (LWP 2124)):
#0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
#1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
#2  0x0000003003047f59 in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#3  0x00007f938eb668ef in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#4  0x00007f938d3132de in ?? () from /usr/lib64/libQtGui.so.4
#5  0x00007f938eb36682 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#6  0x00007f938eb368d7 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib64/libQtCore.so.4
#7  0x00007f938eb3b435 in QCoreApplication::exec() () from /usr/lib64/libQtCore.so.4
#8  0x0000000000404011 in ?? ()
#9  0x00007f938cd2423d in __libc_start_main () from /lib64/libc.so.6
#10 0x0000000000404339 in _start ()
>detach
>quit

I realize I have almost no debug packages installed. I can install the ones you guys really need; all of them would eat too much space on this rather small drive (30 GB for the whole system, so you can imagine I didn't give root that much room).

Lemme know what I can do, if anything...
Comment 37 Randy Andy 2012-02-12 17:29:41 UTC
Same problem here
Comment 38 Diego Viola 2012-02-12 17:30:45 UTC
(In reply to comment #36)
> So I don't actually have any queries running. Yet Virtuoso uses between 70 and
> 100% cpu on my dualcore laptop. Sometimes it stops using 100% cpu for a minute
> or two but it goes back using 100% quickly
> 
> According mr Handa on http://vhanda.in/blog/2012/02/virtuoso-going-crazy-/ I
> have to use GDB to figure out what's going on.
> 
> Just to be sure, 
> $ qdbus org.kde.nepomuk.services.nepomukqueryservice
> returns:
> /
> /nepomukqueryservice
> /servicecontrol
> 
> So what I did as per Vishesh' request:
> (figure out nepomukstorage pid)
> gdb
> attach (pid)
> (gdb) thread apply all backtrace
> 
> Thread 30 (Thread 0x7f93853de700 (LWP 2129)):
> #0  0x00007f938cdd5d33 in select () from /lib64/libc.so.6
> #1  0x00007f938eb14931 in ?? () from /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #4  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 29 (Thread 0x7f937ffff700 (LWP 2165)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f938eb1648f in ?? () from /usr/lib64/libQtCore.so.4
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 28 (Thread 0x7f937f5f6700 (LWP 2192)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bca598 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 27 (Thread 0x7f937edf5700 (LWP 2198)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> ---Type <return> to continue, or q <return> to quit---
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 26 (Thread 0x7f937e5f4700 (LWP 2199)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 25 (Thread 0x7f937ddf3700 (LWP 2200)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 24 (Thread 0x7f937d5f2700 (LWP 2201)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> ---Type <return> to continue, or q <return> to quit---
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 23 (Thread 0x7f937cdf1700 (LWP 2202)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 22 (Thread 0x7f935bfff700 (LWP 2205)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 21 (Thread 0x7f935b7fe700 (LWP 2208)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 20 (Thread 0x7f9353fff700 (LWP 2210)):
> ---Type <return> to continue, or q <return> to quit---
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 19 (Thread 0x7f935affd700 (LWP 2212)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 18 (Thread 0x7f935a7fc700 (LWP 2214)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 17 (Thread 0x7f93597fa700 (LWP 2225)):
> #0  0x00007f938cdd5d33 in select () from /lib64/libc.so.6
> 
> #1  0x00007f93848b6fd9 in ?? () from /usr/lib64/virtodbc_r.so
> #2  0x00007f93848bbd39 in ?? () from /usr/lib64/virtodbc_r.so
> #3  0x00007f93848868b1 in ?? () from /usr/lib64/virtodbc_r.so
> #4  0x00007f938488ab7d in ?? () from /usr/lib64/virtodbc_r.so
> #5  0x00007f93853f66f7 in ?? () from /usr/lib64/libiodbc.so.2
> #6  0x00007f93853f6a3d in SQLExecDirect () from /usr/lib64/libiodbc.so.2
> #7  0x00007f9385654b1f in ?? () from
> /usr/lib64/soprano/libsoprano_virtuosobackend.so
> ---Type <return> to continue, or q <return> to quit---
> #8  0x00007f9385654de6 in ?? () from
> /usr/lib64/soprano/libsoprano_virtuosobackend.so
> #9  0x00007f938563ea8a in ?? () from
> /usr/lib64/soprano/libsoprano_virtuosobackend.so
> #10 0x00007f9387dfebad in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #11 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement
> const&) () from /usr/lib64/libsoprano.so.4
> #12 0x00007f9387e04b81 in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #13 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement
> const&) () from /usr/lib64/libsoprano.so.4
> #14 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement
> const&) () from /usr/lib64/libsoprano.so.4
> #15 0x00007f938bc6bd86 in Soprano::FilterModel::addStatement(Soprano::Statement
> const&) () from /usr/lib64/libsoprano.so.4
> #16 0x00007f9387e2f02e in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #17 0x00007f9387e34dd8 in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #18 0x00007f9387e1a1b8 in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #19 0x00007f9387e278c2 in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #20 0x00007f9387e27e8c in ?? () from /usr/lib64/kde4/nepomukstorage.so
> #21 0x00007f938ea2dd12 in ?? () from /usr/lib64/libQtCore.so.4
> #22 0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #23 0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #24 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 16 (Thread 0x7f9358ff9700 (LWP 2227)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 15 (Thread 0x7f93537fe700 (LWP 2233)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 14 (Thread 0x7f9352ffd700 (LWP 2463)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 13 (Thread 0x7f9359ffb700 (LWP 3030)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 12 (Thread 0x7f9384878700 (LWP 5727)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 11 (Thread 0x7f93527fc700 (LWP 5819)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 10 (Thread 0x7f93513ed700 (LWP 5825)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 9 (Thread 0x7f9350bec700 (LWP 5830)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 8 (Thread 0x7f933ffff700 (LWP 6064)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 7 (Thread 0x7f9351bee700 (LWP 6065)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 6 (Thread 0x7f933f7fe700 (LWP 6107)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 5 (Thread 0x7f933effd700 (LWP 6108)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 4 (Thread 0x7f933e7fc700 (LWP 6109)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 3 (Thread 0x7f933dffb700 (LWP 6110)):
> #0  0x00007f938e7a6e6c in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x00007f938ea3aa6b in QWaitCondition::wait(QMutex*, unsigned long) () from
> /usr/lib64/libQtCore.so.4
> #2  0x00007f938ea2dddf in ?? () from /usr/lib64/libQtCore.so.4
> #3  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #4  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #5  0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 2 (Thread 0x7f9337fff700 (LWP 25987)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #5  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938ea37537 in QThread::exec() () from /usr/lib64/libQtCore.so.4
> #7  0x00007f9387bb7758 in ?? () from /usr/lib64/libsopranoserver.so.1
> #8  0x00007f938ea3a55b in ?? () from /usr/lib64/libQtCore.so.4
> #9  0x00007f938e7a2f05 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007f938cddc63d in clone () from /lib64/libc.so.6
> 
> Thread 1 (Thread 0x7f938f078760 (LWP 2124)):
> #0  0x00007f938cdd3523 in poll () from /lib64/libc.so.6
> #1  0x0000003003047a98 in ?? () from /usr/lib64/libglib-2.0.so.0
> #2  0x0000003003047f59 in g_main_context_iteration () from
> /usr/lib64/libglib-2.0.so.0
> #3  0x00007f938eb668ef in
> QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
> from /usr/lib64/libQtCore.so.4
> #4  0x00007f938d3132de in ?? () from /usr/lib64/libQtGui.so.4
> #5  0x00007f938eb36682 in
> QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #6  0x00007f938eb368d7 in
> QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from
> /usr/lib64/libQtCore.so.4
> #7  0x00007f938eb3b435 in QCoreApplication::exec() () from
> /usr/lib64/libQtCore.so.4
> #8  0x0000000000404011 in ?? ()
> #9  0x00007f938cd2423d in __libc_start_main () from /lib64/libc.so.6
> #10 0x0000000000404339 in _start ()
> >detach
> >quit
> 
> I realize I have almost no debug packages installed. I can install the ones you
> guys really need - all of them eat too much space on this rather small drive
> (30GB for the whole system, you can imagine I didn't give root that much room).
> 
> Lemme know what I can do, if anything...

I've had/seen this issue as well.
Comment 39 Sebastian Trueg 2012-02-13 16:33:09 UTC
Created attachment 68763 [details]
Patch against kdepim-runtime 4.8

I need some testers: the attached patch adds throttling to the Akonadi Nepomuk feeder. This should lower the CPU load while you are working. As soon as you go away from the system it goes back up to full speed. This is the same approach that worked wonders with the file indexer. There might be additional improvements, but this is the first step.
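
For illustration, here is a minimal, self-contained sketch of how such idle-based throttling can be wired up with KIdleTime from kdelibs 4. The class name, slot names and the two-minute delay are invented for the example and are not taken from the attached patch:

#include <KIdleTime>
#include <QObject>

// Sketch only: switch an indexer between reduced and full speed depending on
// user activity as reported by KIdleTime.
class ThrottleController : public QObject
{
    Q_OBJECT
public:
    explicit ThrottleController(QObject *parent = 0)
        : QObject(parent)
    {
        // Consider the user idle after two minutes without input.
        KIdleTime::instance()->addIdleTimeout(2 * 60 * 1000);
        connect(KIdleTime::instance(), SIGNAL(timeoutReached(int)),
                this, SLOT(slotUserIdle()));
        connect(KIdleTime::instance(), SIGNAL(resumingFromIdle()),
                this, SLOT(slotUserActive()));
    }

private Q_SLOTS:
    void slotUserIdle()
    {
        // Ask KIdleTime to emit resumingFromIdle() on the next input event.
        KIdleTime::instance()->catchNextResumeEvent();
        // ...switch the indexing queues to full speed here...
    }

    void slotUserActive()
    {
        // ...switch the indexing queues back to reduced speed here...
    }
};

This matches the behaviour described above: full speed only while the user is away, throttled again as soon as activity resumes.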
Comment 40 H.H. 2012-02-13 17:10:34 UTC
@Sebastian Trueg: thanks, this is a good addition in general, but I doubt that it solves this bug; it probably only mellows the symptoms.

My impression is that the load never really stops (the work never seems to get done - some infinite loop?).

A bit off topic: another thing I noticed and wondered about: I moved a (big) local mail folder from one folder to another. This used to be (kmail1) a simple filesystem move operation (moving the maildir). But now kmail needs a few minutes with high akonadi CPU load, and I have to wait that long to see the view updated. Why? What happens there?
Comment 41 Ralph Moenchmeyer 2012-02-13 18:10:16 UTC
(In reply to comment #39)
> Created an attachment (id=68763) [details]
> Patch against kdepim-runtime 4.8

> This is the same approach
> that worked wonders with the file indexer. There might be additional
> improvements but this is the first step.

It will of course be an improvement to throttle Nepomuk. 

However, the bug seems to have fundamentally different causes, as the CPU-consuming behaviour disappears, at least for some users, when the Nepomuk directory and the Nepomuk configuration files are deleted (see comments #3 and #18).

In my case the indexer now behaves perfectly normally - just some indexing when KMail loads new mails from an IMAP server, and that with very short and very low CPU consumption. And it already behaves adaptively.

So, something might go wrong during or after an update from KDE 4.7 to KDE 4.8 - maybe a mismatch between the new program version and the existing configuration files? In addition, in most cases where there still is a problem, Nepomuk/Virtuoso do not seem to stop!

I shall try to install openSUSE 12.1 without KDE, then add a clean KDE 4.8, attach KMail to an IMAP server with several GByte of mails and then see what happens. If that works it may be a hint that the problem has to do with updates and configuration files from older versions. It would not be the first time ....
I'll come back with the results later ....
Comment 42 Ralph Moenchmeyer 2012-02-13 20:06:07 UTC
Results of what I suggested in comment #41, i.e. a fresh install of openSUSE 12.1 with all updates + a fresh install of KDE 4.8 (no update from KDE 4.7.4) + a new user + a KMail connection to an IMAP server with around 7 GByte of mails distributed across several hundred email folders. The client has a quad-core CPU and a fast RAID array. The server was under minor load:

KMail and Akonadi load the basic information about the mails rather quickly - maybe within 3 minutes. Very good!
Then Nepomuk started its business at 19:49. The CPU load is as follows:

Percentages are given relative to the power of only one (!) core - the load, however, was rather equally distributed over all 4 cores. So, to get the total load over all processor cores you have to divide by a factor of 4.

virtuoso-t: 68 % (of one core) 
nepomukservicestub: 30% (of one core)
kontact: 11 % (of one core)
akonadi_nepomuk_email_feeder: 9%

With that, Nepomuk was indexing for a while - until 20:02, i.e. it took only around 13 minutes to do the job. During that time Nepomuk behaved perfectly adaptively, i.e. using the mouse or another application led to a sharp drop in Nepomuk's activity.

At 20:03 the CPU load due to Kontact, Akonadi, Nepomuk and Virtuoso dropped to zero (< 0.1 %) and remained there!

I sent myself a bunch of mails afterwards - when KMail/Akonadi updated from the IMAP server this led to minimal indexing activity - almost not noticeable.

This sounds like almost perfect and performant behaviour. So, from that I really would guess that all the problems described for this bug are due to configuration inconsistencies which occur during/after updates from KDE 4.7 to KDE 4.8. Which I experienced myself ...

These inconsistencies - at least in my case - could be remedied by deleting the nepomuk directory and nepomuk configuration files as described already in comment #3. 

I want to add something to my own comment #18: deleting the files
~/.kde4/share/config/akonadi_nepomuk_email_feederrc
~/.kde4/share/config/akonadi_nepomuk_contact_feederrc
~/.kde4/share/config/akonadi_nepomuk_calendar_feederrc
is completely unnecessary for resolving the Nepomuk problem after updating to KDE 4.8.
I tested this for another user whom I upgraded from KDE 4.7. I have meanwhile added these files, with config directives for an initial indexing, to my own account again - no problems. The process akonadi_nepomuk_email_feeder works as expected.
Comment 43 Ralph Moenchmeyer 2012-02-13 22:03:43 UTC
Sorry, I messed it up. 
Tested on the wrong virtual machine. 

Comment #42 is invalid. 

Actually I tested a 4.7.2 installation, and there everything is of course perfect.

So just forget about comment #42.

Sorry, sorry, .... for the confusion ..... 

I have to setup the 4.8 test again - but tomorrow ....
Comment 44 Sebastian Trueg 2012-02-14 09:12:32 UTC
Some technical explanations: The feeder in its current state (without my patch) is never fully suspended. It makes a distinction between the initial indexing/updating of all items and newly added or changed items. The indexing of the latter is never stopped, even if the user is doing something. My assumption (which I have not tested yet) is that when starting with a clean Nepomuk db, most items are indexed through the "low priority queue", which is suspended completely when the user is active. However, if there is data in Nepomuk already, the items go through the "high priority queue", which means unthrottled indexing with very high CPU load.

Thus, my patch which actually throttles both queues should make a big difference.
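
As a rough illustration of the two-queue scheme described above, here is a small sketch with invented names and delays; the real feeder in kdepim-runtime is considerably more involved:

#include <QObject>
#include <QQueue>
#include <QString>
#include <QTimer>

// Sketch only: a feeder with a high and a low priority queue. Before the
// patch only the low priority queue was suspended on user activity; the
// patch throttles both queues.
class FeederQueues : public QObject
{
    Q_OBJECT
public:
    explicit FeederQueues(QObject *parent = 0)
        : QObject(parent), m_userActive(false)
    {
        connect(&m_timer, SIGNAL(timeout()), this, SLOT(processNext()));
        m_timer.start(0); // process items back-to-back while the user is idle
    }

    void enqueueChangedItem(const QString &item) { m_highPrio.enqueue(item); }
    void enqueueInitialItem(const QString &item) { m_lowPrio.enqueue(item); }

    void setUserActive(bool active)
    {
        m_userActive = active;
        // Throttle *both* queues while the user is active instead of letting
        // the high priority queue run unthrottled.
        m_timer.start(active ? 500 : 0);
    }

private Q_SLOTS:
    void processNext()
    {
        if (!m_highPrio.isEmpty())
            indexItem(m_highPrio.dequeue());
        else if (!m_lowPrio.isEmpty() && !m_userActive)
            indexItem(m_lowPrio.dequeue());
    }

private:
    void indexItem(const QString &item) { Q_UNUSED(item); /* feed the item to Nepomuk */ }

    QQueue<QString> m_highPrio;
    QQueue<QString> m_lowPrio;
    QTimer m_timer;
    bool m_userActive;
};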
Comment 45 Alex 2012-02-14 09:37:27 UTC
In reply to #44
> Some technical explanations: The feeder in its current state (without my patch)
> is never fully suspended. It makes a distinction between the initial
> indexing/updating of all items and newly added or changed items. The indexing
> of the latter is never stopped, even if the user is doing something. My 
> assumption about it is (and I did not test this yet) that when starting with a
> clean Nepomuk db most items are indexed through the "low priority queue" which
> is actually suspended completely when the user is active. However, if there is
> data in Nepomuk already the items go through the "high priority queue" which
> means unthrottled indexing with very high CPU load.

> Thus, my patch which actually throttles both queues should make a big
> difference.

While your patch might help to cure the symptoms, it does not fix the problem. I deleted my nepomuk configuration and database and I do see the behaviour you described above (0% cpu when I'm doing something, 100% after a few seconds of inactivity)

However Nepomuk never stops. Even if I let my system run for days, Nepomuk will still use 100% cpu when there is no user activity. Strangely enough qdbus does not report any active queries and the only way to stop Nepomuk from using the cpu is disabling it completely.

I can disable e-mail indexing and file indexing, but Nepomuk will still consume CPU time. Only if I disable Nepomuk as well does the CPU usage finally drop. That's what convinces me that the whole problem has nothing to do with actual indexing being done. If neither file indexing nor e-mail indexing is active, Nepomuk should stop its activity sooner or later as there is nothing to do.
Comment 46 Beat Wolf 2012-02-14 10:05:04 UTC
Throttling might actually be worse overall, because there will be no more bug reports, but Nepomuk will just use a few % of the CPU forever, without making the system unusable.
Comment 47 Sebastian Trueg 2012-02-14 10:47:21 UTC
*** Bug 281653 has been marked as a duplicate of this bug. ***
Comment 48 Sebastian Trueg 2012-02-14 11:02:20 UTC
(In reply to comment #45)
> While your patch might help to cure the symptoms, it does not fix the problem.
> I deleted my nepomuk configuration and database and I do see the behaviour you
> described above (0% cpu when I'm doing something, 100% after a few seconds of
> inactivity)
> 
> However Nepomuk never stops. Even if I let my system run for days, Nepomuk will
> still use 100% cpu when there is no user activity. Strangely enough qdbus does
> not report any active queries and the only way to stop Nepomuk from using the
> cpu is disabling it completely.

This is not about user queries reported by the query service. Forget those.

> I can disable e-mail indexing and file indexing, but Nepomuk will still consume
> cpu time. Only if I disable Nepomuk as well, cpu usage finally drops. That's
> what convinces me that the whole problem has nothing to do with actual indexing
> being done. If neither file indexing nor e-mail indexing is active, Nepomuk
> should stop its activity sooner or later as there is nothing to do.

Disabling email indexing does not help because it is a no-op. The Akonadi feeder was renamed but the config settings were not updated accordingly. Thus, you simply cannot disable the Akonadi feeder without doing it manually through akonadiconsole. Just try that and you will see that it is in fact the problem.
Comment 49 Alex 2012-02-14 11:49:33 UTC
(In reply to comment #48)
> Disabling email indexing does not help because it is a no-op. The Akonadi
> feeder was renamed but the config settings were not updated accordingly. Thus,
> you simply cannot disable the Akonadi feeder without doing it manually through
> akonadiconsole. Just try that and you will see that it is in fact the problem.

Ok, I'll try that when I'm back at home. However I do not know how to disable the akonadi feeder using the akonadiconsole. Will a simple "akonadictl stop" suffice as well?

If this is all a simple configuration problem, how come Nepomuk never stops? I only have a few hundred mails - so if it is the Akonadi feeder, why does it run for days without coming to an end? I would expect that scanning a few hundred mails (~300) would take only a few minutes.
Comment 50 Sebastian Trueg 2012-02-14 12:42:34 UTC
(In reply to comment #49)
> (In reply to comment #48)
> > you simply cannot disable the Akonadi feeder without doing it manually through
> > akonadiconsole. Just try that and you will see that it is in fact the problem.
> 
> Ok, I'll try that when I'm back at home. However I do not know how to disable
> the akonadi feeder using the akonadiconsole. Will a simple "akonadictl stop"
> suffice as well?

yes, sure, that is fine, too.

> If this is all a simple configuration problem? How comes that Nepomuk never
> stops? I only have a few hundred mails - so if it is the Akonadi feeder, why
> does it run for days without coming to an end? I would expect that scanning a
> few hundred mails (~300) would take only a few minutes.

That is another issue I will look into next.
Comment 51 Wolfgang Mader 2012-02-14 12:59:41 UTC
Created attachment 68790 [details]
Screenshot of akonadi console
Comment 52 Wolfgang Mader 2012-02-14 13:02:05 UTC
In response to Comment #48.
Are you referring, by "disabling the Akonadi feeder using akonadiconsole", to the GUI tool (see attachment in Comment #51)? If so, even with KDE 4.7 the Akonadi Nepomuk Feeder was suspended due to a busy system all the time. I never managed to get it running for more than a minute. If I set this feeder to "Offline", nothing changes on the CPU usage side.

A second piece. I upgraded another machine from 4.7 -> 4.8, and the user had never used any rating or other semantic stuff, just KMail. Therefore I would assume that, apart from automated data, nothing was in the Nepomuk database (sorry if I mix up terminology, I hope you get the idea). For this update everything went smoothly. virtuoso-t was active with around 40% CPU for, let's say, 3 minutes and then it went back to 0% without coming up again.
Comment 53 Sebastian Trueg 2012-02-14 13:16:40 UTC
(In reply to comment #52)
> In response do Comment #48.
> Are you refering with disabling Akonadi feeder using akonadiconsole to gui tool
> (see attachement in Comment #51). If so, even with KDE 4.7 the Akonadi Nepomuk
> Feed was suspended due to a busy system all the time. I never managed to get it
> running for more than a minute. If I set this feeder of "Offline" nothing
> changes on the cpu usage side.

That is exactly what I said above. Setting it offline will not change anything with respect to the "high priority" queue. That is why you need to remove it or stop Akonadi to see the effect.
Comment 54 Alejandro Nova 2012-02-14 13:26:34 UTC
The Akonadi-Nepomuk Feeder can't be persistently removed. It gets automatically added if I close and restart my session.

BTW, can you make the option to Disable Mail Indexing actually work?
Comment 55 Sebastian Trueg 2012-02-14 13:47:02 UTC
(In reply to comment #54)
> BTW, can you make the option to Disable Mail Indexing actually work?

Sure, I will do that.
Comment 56 Ralph Moenchmeyer 2012-02-14 18:11:14 UTC
I now tested correctly what I did wrong yesterday. 

I installed an Opensuse 12.1 (x86_64) system from scratch without any KDE. So no KDE 4.7.2. No upgrades to the system, either. 

I then installed KDE 4.8 from the following repository:
http://download.opensuse.org/repositories/KDE:/Release:/48/openSUSE_12.1/

Afterwards I connected to an IMAP-Server with Gigabytes of mails.

Believe it or not: 
Indexing started - but it came to an end - and we are talking about a time of much less than an hour. Not as fast as with KDE 4.7.2, but within reasonable limits.

The system and the indexers (Nepomuk, Strigi) have behaved smoothly since then.

Whatever this means ....
Comment 57 Alex 2012-02-14 19:54:00 UTC
(In reply to comment #48)
> Disabling email indexing does not help because it is a no-op. The Akonadi
> feeder was renamed but the config settings were not updated accordingly. Thus,
> you simply cannot disable the Akonadi feeder without doing it manually through
> akonadiconsole. Just try that and you will see that it is in fact the problem.

I just tried it and you are absolutely right. CPU load drops to zero and stays there as soon as I disable Akonadi completely. Now the only question remains is why Akonadi continuously feeds Nepomuk without ever coming to an end (at least for me).
Comment 58 Sebastian Trueg 2012-02-15 13:09:35 UTC
Created attachment 68819 [details]
Patch against kdepim-runtime

This updated patch addresses a few issues:
1. Indexing is throttled during user activity
2. When put offline the feeder is actually offline and does not do anything anymore
3. The endless indexing problem is fixed. It was caused by the fact that the feeder would reindex everything at startup until the initial index had finished once. That meant you had to leave your computer alone until the indexing was finished; otherwise it would just start over again after the next reboot, since user interaction stopped the initial indexing. (A small sketch of persisting this state follows at the end of this comment.)
4. Even updating all collections will not reindex each item. Only those which actually changed or need to be updated due to an improved indexer are re-indexed.
5. Fewer queries are used to check whether an item needs to be re-indexed.

The only downside is that all items have to be reindexed once to contain the new metadata to track changes.

Please test this patch (or branch throttleNepomukFeeder) so I can push this for KDE 4.8.1.
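
Regarding point 3, here is a minimal sketch of how an "initial indexing finished" flag could be persisted across restarts with KConfig; the file name, group and keys below are assumptions for illustration and are not taken from the patch:

#include <KConfig>
#include <KConfigGroup>
#include <QString>

// Sketch only: remember per collection whether the initial indexing run has
// completed, so a restart resumes instead of starting over.
static bool initialIndexingDone(const QString &collectionId)
{
    KConfig config(QLatin1String("akonadi_nepomuk_feederrc"));
    KConfigGroup group(&config, QLatin1String("InitialIndexing"));
    return group.readEntry(collectionId, false);
}

static void markInitialIndexingDone(const QString &collectionId)
{
    KConfig config(QLatin1String("akonadi_nepomuk_feederrc"));
    KConfigGroup group(&config, QLatin1String("InitialIndexing"));
    group.writeEntry(collectionId, true);
    config.sync();
}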
Comment 59 Jörg von Frantzius 2012-02-15 14:41:08 UTC
Do 3., 4. and 5. apply to PIM entries only, or also to Strigi file indexing? If they apply to Strigi also, this would be just great!
Comment 60 Sebastian Trueg 2012-02-15 15:48:21 UTC
(In reply to comment #59)
> Do 3., 4. and 5. apply to PIM entries only, or also to Strigi file indexing? If
> they apply to Strigi also, this would be just great!

The file indexing does not have these problems. The only remaining problem is that some files fail to get indexed due to bugs in Strigi. Whenever such a file is found it should be attached to a bug report.
Comment 61 Anders Lund 2012-02-15 16:20:39 UTC
On Wednesday, 15 February 2012 15:48:22, Sebastian Trueg wrote:
> the file indexing does not have these problems. The only problem which rests
> is that some files fail to get indexed due to bugs in strigi. Whenever such
> a file is found it should be attached to a bug report.

Are there instructions somewhere for finding out which file?

I experience that nepomuk chokes and stops working during indexing. Then
search (krunner, dolphin) and many other features appear broken, and in my
experience it is very involved to recover - nepomuk must be stopped, virtuoso-t
killed if it does not stop, nepomukserver killed if it does not stop, nepomuk
restarted, and all applications using it restarted (the latter includes krunner,
plasma-desktop and kactivitymanager, which I have not yet found out how to
restart) - the easiest is to log out, check for running leftovers from nepomuk,
and log back in.
Comment 62 Randy Andy 2012-02-15 17:11:30 UTC
Sebastian, I applied your patch a short while ago.

Virtuoso-t's CPU load is pretty low now (<10%), also during indexing.
File search in Dolphin works although the indexing hasn't finished yet, but it did break the file indexing once here.
Just log out and in once again and it starts working again. It seems to work stably now, even after lots of attempts to break it again through Dolphin's file search.

First time that the CPU fan speed is low here, as I expect it on my quad-core with 8GB RAM.

By now, your patch seems to work fine for me. Well done!

I'll let you know if and when the indexing of my 4.5TB drives is finished.

Best regards, Andy.
Comment 63 Alejandro Nova 2012-02-15 17:16:02 UTC
Chakra Linux, KDE 4.8 with kdepim-runtime from the throttleNepomukFeeder branch.

My akonadi-nepomuk-feederrc had compatlevel=3. I erased the compatlevel and disabled the idle detection, to get a pure result. My preliminary tests show me that Nepomuk is behaving in a way I've never seen before, and in a good way.

On computer 1 (32 bit, Core2 Duo E4400, 2 GB RAM), Virtuoso consistently uses between 10 and 20% of CPU after I forced a reindex through the Akonadi Console. There IS activity in the Akonadi Console display, and the Akonadi-Nepomuk Feeder is indexing, so it is working.

Computer 2 (64 bit, TurionX2 TL-34, 4 GB RAM): Virtuoso is using more CPU, but it doesn't reach 100% (I've checked my mail for years using that machine, so Akonadi has more data cached). Also, the CPU usage is even. Before, one core was always being used while the other one was resting. Now I see both cores working, status updates, and even some low points in CPU usage. All this with idle detection disabled, so these Akonadi-Nepomuk Feeder patches seem to be working.
Comment 64 Alejandro Nova 2012-02-15 17:48:28 UTC
Computer 2's results were being distorted by bug 292838. I made the symlink referenced in that bug and Virtuoso started to behave like it did on Computer 1.

The patches are working.
Comment 65 Alejandro Nova 2012-02-15 18:01:16 UTC
*** Bug 291948 has been marked as a duplicate of this bug. ***
Comment 66 Sebastian Trueg 2012-02-15 18:38:17 UTC
Thank you guys for testing. I am happy my fixes had the desired effect. I suppose we can close this one as fixed as soon as the maintainer of the code gives his OK. That should be by the end of the week. So expect this fix to be part of 4.8.1.
Comment 67 Wolfgang Mader 2012-02-15 19:58:32 UTC
Great news. Thank you Sebastian for your work.
Comment 68 Will Stephenson 2012-02-15 20:15:26 UTC
Testing the patch vs 4.8.0.

+     /**
+       * Like ReducedSpeed delays are used but they are much longer
+       * to get even less CPU and IO load. This mode is used for the
+       * first 2 minutes after startup to give the KDE session manager
+       * time to start up the KDE session rapidly.
+       */
+      SnailPace
+  };

and 

+    else if ( speed == SnailPace ) {
+        highPrioQueue.setProcessingDelay(s_snailPaceDelay);
+        setOnline(false);
+    }

Taken with Sebastian's point 2 in comment 58 that Offline means really do nothing, does this mean that even the high prio queue won't do anything during SnailPace speed?
Comment 69 Will Stephenson 2012-02-15 21:16:55 UTC
I'm running with DisableIdleDetection=true here and while CPU usage is acceptable during indexing (10-20% at most) indexing performance seems very slow - progress updates are minutes apart.

I also noticed your patch changes the KIdleTime timeout from 10s to 120s. As I understand the IndexingSpeed comment, this should be to give the system time to complete login, but then I'd expect you to set the timeout back to something lower after that so it can get on with indexing sooner after the user becomes idle.
Comment 70 Blackpaw 2012-02-15 21:19:41 UTC
Excellent news.

Is there a way to force a full reindex, short of deleting the db?
Comment 71 Sebastian Trueg 2012-02-15 22:17:51 UTC
(In reply to comment #68)
> Taken with Sebastian's point 2 in comment 58 that Offline means really do
> nothing, does this mean that even the high prio queue won't do anything during
> SnailPace speed?

No, that is not what it means. The high prio queue will do nothing when the feeder is offline. SnailPace is never activated at the moment. I just left it in for completeness.
Comment 72 Sebastian Trueg 2012-02-15 22:21:10 UTC
(In reply to comment #69)
> I'm running with DisableIdleDetection=true here and while CPU usage is
> acceptable during indexing (10-20% at most) indexing performance seems very
> slow - progress updates are minutes apart.

That is because I did not think of the idle detection disable option and set the default to ReducedSpeed. In the spirit of the old code (which disabled the low prio queue altogether) this means a SnailPace for the low prio queue. I am perfectly fine with changing that to throttle both queues the same way.

> I also noticed your patch changes the KIdleTime timeout from 10s to 120s. As I
> understand the IndexingSpeed comment, this should be to give the system time to
> complete login, but then I'd expect you to set the timeout back to something
> lower after that so it can get on with indexing sooner after the user becomes
> idle.

Has nothing to do with login. 10s is just too little IMHO. 120s is what I use in the file indexer and it proved sensible. 10s means it will go to full speed when you just rest your eyes but want to continue to work 10s later.
Since the indexing never actually stops, 120s makes way more sense.
If the kdepim team and the feeder maintainer disagree - fine by me. I just fixed all the bugs.
Comment 73 Sebastian Trueg 2012-02-15 22:22:01 UTC
(In reply to comment #70)
> Is there a way to force a reindex all, short of deleting the db?

There is no need. Everything will get reindexed anyway since all the indexed items are missing the additional metadata this patch introduces.
Comment 74 Yngve Levinsen 2012-02-17 13:10:18 UTC
For what it's worth, I can confirm similar problems on my 4-year-old MacBook running Chakra Linux, using KDE 4.8 (stable) and the Linux 3.2 series. virtuoso-t is using around 150% of CPU (dual core) even while I am using the machine. That starts the fan, which makes quite a bit of noise, and the computer feels slow. The only solution I've found so far is to turn off desktop search completely and restart the machine.

$ uname -a
Linux yngve-chakra 3.2-CHAKRA #1 SMP PREEMPT Sun Jan 29 14:47:11 UTC 2012 x86_64 Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz GenuineIntel GNU/Linux

On my quad core desktop machine running the same OS it poses far less of a problem. But KDE shouldn't be for desktops only I'd say.. ;)
Comment 75 Sebastian Trueg 2012-02-17 14:09:13 UTC
*** Bug 293641 has been marked as a duplicate of this bug. ***
Comment 76 Sebastian Trueg 2012-02-17 14:09:56 UTC
Based on the feedback to my patches I consider this bug as fixed in 4.8.1
Comment 77 Diego Viola 2012-02-17 14:26:20 UTC
Thanks to everyone who have worked on this, KDE rocks. :-)
Comment 78 Will Stephenson 2012-02-28 15:22:55 UTC
Reopening as we found more ways to eat CPU.
Comment 79 Alejandro Nova 2012-02-28 22:41:59 UTC
Can you make this a metabug? This bug must depend on every bug making Nepomuk actually eat more CPU than it should.
Comment 80 Sebastian Trueg 2012-03-08 19:02:32 UTC
Created attachment 69386 [details]
Patch against kde-runtime (branch KDE/4.8)

This is the patch I described in http://trueg.wordpress.com/2012/03/07/nepomuk-gives-back-your-cpu-cycles/
It improves the performance of the resource identification significantly and hopefully solves the problem of endless CPU hogging once and for all.
Comment 81 Anders Lund 2012-03-08 19:50:59 UTC
With KDE 4.8.1, Akonadi has stopped invalidating Nepomuk, and I can run file indexing and use Nepomuk search in Dolphin and KRunner again.
I have enabled mail indexing to see if it would work, and it appears to be better than in KDE 4.8.0; indexing the backlog of messages appears to work without making the system completely unusable. So there is a clear improvement in KDE 4.8.1.
Comment 82 Guillaume DE BURE 2012-03-08 21:52:12 UTC
I still have the issue here in 4.8.1 (Arch Linux), after a cleanup of my database (rm -rd .kde4/**/*nepomuk*). Looks like the faulty query was:

qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 queryString
select distinct ?r ?reqProp1 (bif:concat(bif:search_excerpt(bif:vector('guillaume.debure@gmail.com'), ?v4))) as ?_n_f_t_m_ex_ where { { ?r <http://akonadi-project.org/ontologies/aneo#akonadiItemId> ?reqProp1 . ?r <http://www.semanticdesktop.org/ontologies

Tried to close the query with :
qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 close

But the CPU went even higher! I had to kill -9 the virtuoso-t processes to stop it, but that crashed the plasma activity manager...

If you need anything more, just ask !
Comment 83 S. Burmeister 2012-03-08 22:03:17 UTC
If you see any searches in kmail's folder tree, try to remove them.
Comment 84 Guillaume DE BURE 2012-03-08 22:16:57 UTC
Indeed, there was a "Last Search" in kmail's tree. Removed it, here's hoping it will fix it :)
Comment 85 Michael Reiher 2012-03-12 21:37:06 UTC
I just updated to 4.8.1 (Kubuntu Oneiric packages) and still have virtuoso-t using up CPU. This is a bit frustrating, I have to say...

I sometimes have several virtuoso-t threads using ~150% CPU on this dual-core machine, and mostly a single thread using up an entire core.

The akonadi_nepomuk_feeder entry in akonadiconsole says: "System busy, indexing suspended." and when the system is idle, something like "nothing to index". So this looks good to me.

I tried stopping Akonadi, but virtuoso was still using up CPU.
I disabled Nepomuk in System Settings, but virtuoso was still using up CPU.
Then I stopped nepomukserver via dbus quit, and then virtuoso stopped using CPU.
I re-enabled Nepomuk in System Settings (file and email indexing enabled) and virtuoso was still not using CPU.
But when I restarted Akonadi, virtuoso started eating CPU again.
So it seems stopping Akonadi doesn't stop virtuoso eating CPU, but starting it triggers the CPU eating.

There is nothing in the nepomuk/repository/main/data/virtuosobackend/soprano-virtuoso.log.

No idea if this has something to do with it, but in .xsession-errors I see:

akonadi_nepomuk_feeder(2127) ItemQueue::removeDataResult: "Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken."
Comment 86 Alejandro Nova 2012-03-13 03:02:19 UTC
If you don't have at least Akonadi 1.7.0 and Soprano 2.7.3, please, file (another) bug in Kubuntu.
Comment 87 Sebastian Trueg 2012-03-13 08:47:12 UTC
Git commit 754275eda610dce1160286a76339353097d8764c by Sebastian Trueg.
Committed on 09/03/2012 at 17:17.
Pushed by trueg into branch 'KDE/4.8'.

Backport from nepomuk-core: improved performance on res identification.
FIXED-IN: 4.8.2

M  +52   -23   nepomuk/services/backupsync/lib/resourceidentifier.cpp

http://commits.kde.org/kde-runtime/754275eda610dce1160286a76339353097d8764c
Comment 88 Sebastian Trueg 2012-03-13 08:48:29 UTC
Git commit ab1f42b346489cd0681c68072d089217bcc5c6c0 by Sebastian Trueg.
Committed on 09/03/2012 at 17:17.
Pushed by trueg into branch 'master'.

Backport from nepomuk-core: improved performance on res identification.
FIXED-IN: 4.8.2

M  +52   -23   nepomuk/services/backupsync/lib/resourceidentifier.cpp

http://commits.kde.org/kde-runtime/ab1f42b346489cd0681c68072d089217bcc5c6c0
Comment 89 Michael Reiher 2012-03-13 12:27:18 UTC
(In reply to comment #86)
> If you don't have at least Akonadi 1.7.0 and Soprano 2.7.3, please, file
> (another) bug in Kubuntu.

This is what I have installed:

akonadi-server                        1.7.0-0ubuntu1~oneiric1~ppa2 
libsoprano4                           2.7.4+dfsg.1-0ubuntu0.1 
soprano-daemon                        2.7.4+dfsg.1-0ubuntu0.1
virtuoso-minimal                      6.1.3+dfsg1-1ubuntu1
virtuoso-opensource-6.1-bin           6.1.3+dfsg1-1ubuntu1
virtuoso-opensource-6.1-common        6.1.3+dfsg1-1ubuntu1
Comment 90 Søren Holm 2012-03-23 21:37:32 UTC
I'm running KDE 4.8.1. virtuoso-t is not really painful anymore, it's just annoying. Currently it has indexed around 30000 files. At around 1 file per second this amounts to 8 hours at least. virtuoso-t averages around 20% CPU utilization. Somehow it seems to never stop.

Disabling file indexing and email indexing improves the situation a lot.

The thing is that I *really* would like email indexing, but virtuoso-t still uses a lot of CPU just doing that. kontact and virtuoso-t periodically hog 1 core each.
Comment 91 Sebastian Trueg 2012-03-30 08:14:59 UTC
*** Bug 296372 has been marked as a duplicate of this bug. ***
Comment 92 Franz Trischberger 2012-04-05 16:28:33 UTC
kde-4.8.2, at the moment virtuoso started spinning ;)
This query is active:

qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query23 queryString

prefix nco:<http://www.semanticdesktop.org/ontologies/2007/03/22/nco#>SELECT DISTINCT ?person WHERE {   graph ?g {     ?person <http://akonadi-project.org/ontologies/aneo#akonadiItemId> ?itemId .     ?person a nco:PersonContact ;             nco:hasEmailAddress ?email .     ?email nco:emailAddress "newsletter@kopp-verlag.de"^^<http://www.w3.org/2001/XMLSchema#string> .   } }

But had another hang of kontact today, where no query was active.
Comment 93 Graham Anderson 2012-04-05 17:09:05 UTC
(In reply to comment #92)
> kde-4.8.2, at the moment virtoso startet spinning ;)

For what it's worth, I noticed virtuoso_t doing some work for a couple of hours last night after I updated to 4.8.2 ( I had disabled it for 4.8.1) however it has since stopped working so hard and is behaving quite normally. I would point out that while it was working, the size of my nepomuk db was changing and it's likely it was indexing emails during that time. Myabe you can observe the same thing?

I had a lot of emails not in my db since it had been disabled; maybe you are in the same boat? In that case try to be patient and let nepomuk finish its index. In any case, while it's working the new behaviour is to not hog all the CPU: the CPU usage will jump if you are idle for some minutes and then go lower again once you resume activity.
Comment 94 Unknown 2012-04-12 18:33:52 UTC
I don't know exactly what the summary is - whether it can be fully fixed in 4.8 or only in 4.9 - but in KDE 4.8.2 this bug is definitely still here.

I watched a movie and shortly after that I noticed that one core was under 100% load.

OS: openSUSE 11.2
Akonadi: 1.7.1
Soprano: 2.7.5

The only workaround I know is:
watch -n 1 "pkill -9 virtuoso-t"

If it occurs again in the future, what commands should I issue to identify the problem?
Comment 95 Unknown 2012-04-14 12:15:31 UTC
The backtrace of the virtuoso-t process when was using 100% CPU:

#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
#1  0x00000000007ecbf5 in semaphore_enter (sem=0x1109490) at sched_pthread.c:932
#2  0x00000000004d1d6e in lt_wait_until_dead (lt=<optimized out>) at rltrx.c:1280
#3  0x00000000004b70eb in cpt_rollback (may_freeze=0) at neodisk.c:174
#4  0x00000000004bab0b in dbs_checkpoint (log_name=0x0, shutdown=0) at neodisk.c:1879
#5  0x0000000000591ec2 in sf_makecp (log_name=0x0, trx=0x0, fail_on_vdb=<optimized out>, shutdown=0) at sqlsrv.c:2713
#6  0x0000000000592141 in sf_make_auto_cp () at sqlsrv.c:2673
#7  0x000000000044c1d2 in main (argc=5, argv=0x110c760) at viunix.c:805

KMail and akonadiconsole were frozen (they didn't respond to anything), so I couldn't get any useful information except the backtrace above.
After killing it, KMail rapidly recovered.
Comment 96 Jirka Klimes 2012-04-16 14:52:55 UTC
Created attachment 70429 [details]
3 backtraces of virtuoso_t when it consumes ~100% of CPU

This is another "me too" for virtuoso_t eating too much CPU.

I am not sure about KDE 4.8.2, but 4.8.1 is definitely *not* fixed.
I am on Fedora 16 (KDE 4.8.1):
$ rpm -q virtuoso-opensource
virtuoso-opensource-6.1.4-4.fc16.x86_64
$ rpm -q kdelibs
kdelibs-4.8.1-3.fc16.x86_64

I followed http://vhanda.in/blog/2012/02/virtuoso-going-crazy-/

jklimes@gromit ~$  qdbus org.kde.nepomuk.services.nepomukqueryservice
/
/nepomukqueryservice
/nepomukqueryservice/query1
/servicecontrol
jklimes@gromit ~$ qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 queryString
select distinct ?r ?reqProp1 (bif:concat(bif:search_excerpt(bif:vector('hot'), ?v2))) as ?_n_f_t_m_ex_ where { { ?r <http://akonadi-project.org/ontologies/aneo#akonadiItemId> ?reqProp1 . ?r <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#isPartOf> <nepomuk:/res/3354be4b-ee50-4fb2-8f7e-a6e0097e9309> . ?r <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#messageSubject> ?v2 . FILTER(bif:contains(?v2, "'hot'")) . ?r a <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#Email> . } . ?r <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#userVisible> ?v1 . FILTER(?v1>0) . }
jklimes@gromit ~$ qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 close

jklimes@gromit ~$ qdbus org.kde.nepomuk.services.nepomukqueryservice
/
/nepomukqueryservice
/servicecontrol
jklimes@gromit ~$

Closing the query didn't help.

Please find the virtuoso_t backtrace in the attachment (there are three backtraces in the file).
Comment 97 Wolfgang Mader 2012-04-16 15:08:39 UTC
I still experience some flavour of this bug, but only rarely. I believe it is connected to indexing mails. Whenever it happens the KMail UI is frozen, and virtuoso_t takes 100% of both cores, but the 'Desktop Search File Indexing' system tray app does not show any activity.

My system runs:
Arch Linux
virtuoso 6.1.4-2
kde 4.8.2-1

Sorry for the sparse information, but this is all I have right now.
Comment 98 Diego Viola 2012-04-16 15:23:14 UTC
I don't have this issue anymore, but then again I have Nepomuk and Strigi disabled.

Thanks to everyone working on this issue.
Comment 99 Unknown 2012-04-22 20:33:42 UTC
Please, reopen this ticket, because this issue hasn't been fixed yet.

On KDE 4.8.2:
$> qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1
method void org.kde.nepomuk.Query.close()
signal void org.kde.nepomuk.Query.entriesRemoved(QDBusRawType::a(sda{s(isss entries)
signal void org.kde.nepomuk.Query.entriesRemoved(QStringList entries)
signal void org.kde.nepomuk.Query.finishedListing()
method bool org.kde.nepomuk.Query.isListingFinished()
method void org.kde.nepomuk.Query.list()
method void org.kde.nepomuk.Query.listen()
signal void org.kde.nepomuk.Query.newEntries(QDBusRawType::a(sda{s(isss entries)
method QString org.kde.nepomuk.Query.queryString()
signal void org.kde.nepomuk.Query.resultCount(int count)
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface_name, QString property_name)
method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface_name)
method void org.freedesktop.DBus.Properties.Set(QString interface_name, QString property_name, QDBusVariant value)
method QString org.freedesktop.DBus.Introspectable.Introspect()

$> qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 queryString
select distinct ?r ?reqProp1 (bif:concat(bif:search_excerpt(bif:vector('ez','az','els'), ?v8),bif:search_excerpt(bif:vector('ez','az','els'), ?v4))) as ?_n_f_t_m_ex_ where { { ?r <http://akonadi-project.org/ontologies/aneo#akonadiItemId> ?reqProp1 . { ?r

Calling close via "qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 close" does nothing.

According to htop:
 PID USER     PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command                                                                                                                                           
 3793 user       39  19  715M  324M  6944 S 196.  3.2  6:57.81 /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_nS3483.ini +wait

The only solution:
watch -n 1 "pkill -9 virtuoso-t"
Comment 100 Søren Holm 2012-04-26 07:42:42 UTC
I can confirm that it is not fixed. Killing virtuoso-t must be done with -9. After that virtuoso-t starts up again, ending up at a sane CPU workload.
Comment 101 Alejandro Nova 2012-04-26 11:35:12 UTC
I've observed this. However, the anomalous behavior disappears when I increase, in the Nepomuk config interface, the memory limit from 50 MB to 128 MB (64 bit) or 96 MB (32 bit). It seems 50 MB is too little.
Comment 102 Wolfgang Mader 2012-05-06 20:45:23 UTC
I still see variants of this bug with KDE 4.8.3, virtuoso 6.1.5-1, strigi 0.7.7-1, and Akonadi 1.7.2-1. I have the feeling that it is connected to mail indexing, since whenever virtuoso-t eats up the CPU, KMail does not respond any more.

This bug report is marked as fixed. Does this mean that further activity on this bug is not planned and bug reports are not necessary any more?
Comment 103 Diego Viola 2012-05-06 21:45:30 UTC
It seems that Virtuoso has caused lots of pain to users and developers; what I don't understand is why Virtuoso was chosen by the KDE developers.

Why not choose another database engine that is known to perform *better*, like MongoDB or PostgreSQL?

Was Virtuoso chosen because of licensing, or what? It doesn't make sense to settle on a technology that is leaky like this.
Comment 104 Wolfgang Mader 2012-05-06 21:51:55 UTC
(In reply to comment #103)
I am not a database guy, but Ivan Cukic[1] seems to have a valid point about why Virtuoso was chosen. For the task a graph-based database seems to be needed, and Virtuoso is a fast one of this kind.

[1] http://ivan.fomentgroup.org/blog/2012/05/03/nepomuk-dont-misuse/
Comment 105 Diego Viola 2012-05-06 22:55:02 UTC
(In reply to comment #104)
> (In reply to comment #103)
> I am not a database guy, but Ivan Cukic[1] seems to have a valid point why
> virtuoso was chosen. For the task a graph based database seems to be needed,
> and virtuoso is a fast one of this kind.
> 
> [1] http://ivan.fomentgroup.org/blog/2012/05/03/nepomuk-dont-misuse/

Well, I'm not really questioning their decision - I do respect the decisions the KDE developers take - but Virtuoso has been nothing but problems for most users, and this bug report is proof of it.

Perhaps we should re-evaluate that decision if the Virtuoso developers don't care about fixing their software?
Comment 106 Diego Viola 2012-05-07 04:44:22 UTC
Sorry - Virtuoso is open source, and perhaps we should help fix the leaks instead.
Comment 107 Unknown 2012-05-26 12:15:03 UTC
Please, reopen this ticket, because this issue is certainly NOT fixed.

Now, after resuming from sleep, virtuoso-t ate three CPU cores at 100 % and another one at 50 %…
KDE: 4.8.3.

qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 queryString :
select distinct ?r ?reqProp1 (bif:concat(bif:search_excerpt(bif:vector('ez','az','els'), ?v8),bif:search_excerpt(bif:vector('ez','az','els'), ?v4))) as ?_n_f_t_m_ex_ where { { ?r <http://akonadi-project.org/ontologies/aneo#akonadiItemId> ?reqProp1 . { ?r

qdbus org.kde.nepomuk.services.nepomukqueryservice /nepomukqueryservice/query1 :
method void org.kde.nepomuk.Query.close()
signal void org.kde.nepomuk.Query.entriesRemoved(QDBusRawType::a(sda{s(isss entries)
signal void org.kde.nepomuk.Query.entriesRemoved(QStringList entries)
signal void org.kde.nepomuk.Query.finishedListing()
method bool org.kde.nepomuk.Query.isListingFinished()
method void org.kde.nepomuk.Query.list()
method void org.kde.nepomuk.Query.listen()
signal void org.kde.nepomuk.Query.newEntries(QDBusRawType::a(sda{s(isss entries)
method QString org.kde.nepomuk.Query.queryString()
signal void org.kde.nepomuk.Query.resultCount(int count)
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface_name, QString property_name)
method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface_name)
method void org.freedesktop.DBus.Properties.Set(QString interface_name, QString property_name, QDBusVariant value)
method QString org.freedesktop.DBus.Introspectable.Introspect()

Please, do reopen this issue, otherwise it will never get the required amount of attention to get it fixed.
Thank you.
Comment 108 Vishesh Handa 2012-05-26 12:52:15 UTC
(In reply to comment #107)
> Please, reopen this ticket, because this issue is certainly NOT fixed.
> 
> Now, after resuming from sleep, virtuso_t ate three CPU cores at 100 % and
> one other at 50 %…
> KDE: 4.8.3.
> 
> dbus org.kde.nepomuk.services.nepomukqueryservice
> /nepomukqueryservice/query1 queryString :
> select distinct ?r ?reqProp1
> (bif:concat(bif:search_excerpt(bif:vector('ez','az','els'),
> ?v8),bif:search_excerpt(bif:vector('ez','az','els'), ?v4))) as ?_n_f_t_m_ex_
> where { { ?r <http://akonadi-project.org/ontologies/aneo#akonadiItemId>
> ?reqProp1 . { ?r
> 

Hey.

Based on the query string, this seems like a truncated query that Akonadi is sending to Nepomuk. There is already a separate bug report for that, and AFAIK the Akonadi developers are trying to fix it.

So, I'm not changing the status of this bug report, as this is a different issue. However, if some other query seems to be consuming the cpu, please let us know.
Comment 109 Unknown 2012-05-26 13:02:37 UTC
(In reply to comment #108)
In this case, if it's not the same issue, it's unnecessary to reopen this one, of course :).
And if that bug is also being worked on, that is great news, thank you.
Comment 110 Luca Manganelli 2012-06-01 06:15:35 UTC
On KDE 4.8.3 - Arch Linux, I have two Virtuoso processes running at 50% each, indefinitely.
The funny thing is that when running qdbus I don't see any queries:

qdbus org.kde.nepomuk.services.nepomukqueryservice
/
/nepomukqueryservice
/servicecontrol


Disabling Nepomuk indexing (from the systray icon) solves this issue...
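
For anyone who wants to attach the offending query text to a (new) report, here is a minimal shell sketch that enumerates the query objects and prints their query strings. It assumes the service and object-path layout shown above; the grep pattern for query objects is a guess, not a documented interface:

# Diagnostic sketch (assumed layout): print the query string of every
# active query object exported by the Nepomuk query service.
for path in $(qdbus org.kde.nepomuk.services.nepomukqueryservice | grep 'query[0-9]'); do
    echo "== $path =="
    qdbus org.kde.nepomuk.services.nepomukqueryservice "$path" org.kde.nepomuk.Query.queryString
done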
Comment 111 Andreas 2012-06-03 17:06:17 UTC
I see this behaviour with no open queries / all queries closed. I am on Kubuntu 12.04 with the latest KDE 4.8.3 updates. It's not fixed. The problem appears even if indexing is disabled in System Settings, often after sleep/hibernate. The only solution for me is "killall -s9 virtuoso-t".
Comment 112 Timothy Pederick 2012-06-05 10:27:50 UTC
Hmm. Perhaps I should have commented here, rather than on bug 293641. The description there seemed more specific (it specifies "no queries"), but I see that comments here are tending in that direction too.

I too am on Kubuntu 12.04 (amd64), KDE 4.8.3. I only started noticing this issue after some KDE package upgrades last month, although I suppose it's possible that I just didn't notice it before -- since it's using up just one core, it most often makes itself known by the fan whirring, not by poor performance in other apps. That said, it managed to max out both cores for a little while the other day, which is what prompted me to find this (these) bug report(s).
Comment 113 Mirza 2012-08-04 22:14:22 UTC
I have the same problem on KDE 4.9 on Kubuntu 12.04.
I had this problem also with 4.8.x.

Deleting the nepomuk folders in ~/.kde/ only solved the problem temporarily.
After sleep/hibernate or a restart the problem reappears.
As in comment 110, there are no active queries.

Please reopen this bug.

If I disable Nepomuk completely the problem is gone, but I like the new Dolphin features, and they need Nepomuk...
Comment 114 Thomas Platzer 2012-11-09 15:33:39 UTC
Please bear with me, I have a more general question regarding all this.

Virtuoso has troubled users for _years_ now. I can't recommend the KDE desktop (which I'm a huge fan of) to any friends without linking to detailed explanations of how to get rid of the components that will, with high probability, make even the newest systems sluggish and unresponsive. On top of this comes my personal frustration that it does not seem possible to get desktop indexing in KDE working in a consistent and safe way.

We have a great file manager in Dolphin, but quite a few prominently advertised features are worse than useless with the current setup. People may invest a lot of time adding metadata to their stuff, only to lose it every so often, have their computer slow down, or have things work only some of the time.

I would absolutely *love* to tag and rate my stuff, but as of now I'm glad I haven't done so, since the fixes for making my computer usable again mostly amount to deleting everything I've set up. That's like a kick in the face for people who actually use the features the KDE team seems so proud of, features that would indeed change the handling of personal data for the better.

From what I've read in the comments, Virtuoso is the back-end, a graph database. A database that is prone to race conditions, deadlocks and resource leaks? How can a database be as unreliable as what we have seen with Virtuoso/Nepomuk, or whatever the thing is called? The whole approach seems so brittle that I have a hard time believing it has been shipped to actual end users for years now. I even stopped using KMail because it ties into this system, and I've had enough of lost data, hang-ups and the general uneasy feeling that it could blow up in my face any minute.

I have _loved_ and used KDE for quite some time now (except when 4 came out and I had to use GNOME for a year or two), in fact ever since it became a usable successor to fvwm2 :). I think it's by far the best desktop for Linux - that's why I simply can't believe this whole Virtuoso mess. It's baked so deeply into the whole KDE experience that it's not trivial to get rid of it at all. It reminds me of the late 90s, when you installed Windows and had a substantial to-do list of things to deactivate to make the system usable. Windows has learned; now KDE does this? Why? Semantic support on the desktop would be great, but every component that depends on the Nepomuk system becomes susceptible to its problems. It just seems to me that this technology is not ready for prime time, and the problems multiply as more and more components tie into it.
 
It may sound polemical, but are there actual users for whom this whole thing just works and adds positively to their workflow? People who have rated and tagged their stuff and work with it on a day-to-day basis, without losing their data and without CPU fans whirring?

Any comments from the devs or QA? I mean besides explaining why this bug is already closed and fixed when it so painfully obviously isn't? Or adding fixes that only mask the problem by making you less aware that there is a process constantly taxing the CPU?
 
Sorry if this comes across as too harsh; I've tried to keep my frustration in check. I want to add that only things I care very much about can even get me this frustrated.
 
Long live KDE!

Best regards,
Thomas
Comment 115 Christoph Feck 2012-11-09 21:47:30 UTC
Thomas, the problem is that developers can only fix bugs they are able to reproduce. Just saying "it uses 100% CPU here" is not a detailed description of steps to reproduce.

Start with a fresh KDE user account, configure it in a way you can trigger the bug, and report both your configuration and exact steps to reproduce. Please report them as a new bug, because this one is too crowded and may describe different problems.

For more information, see http://trueg.wordpress.com/2012/07/04/debugging-nepomukvirtuosos-cpu-usage/ and http://techbase.kde.org/index.php?title=Development/Tutorials/Metadata/Nepomuk/TipsAndTricks

Also, in the future please use the KDE forums at https://forum.kde.org/ to reach other users.
Comment 116 Pascal Maillard 2012-11-11 16:34:17 UTC
Created attachment 75177 [details]
New crash information added by DrKonqi

kactivitymanagerd (1.0) on KDE Platform 4.8.5 (4.8.5) "release 2" using Qt 4.8.1

Hi, I want to provide a backtrace which could shed light on this bug:

- I killed virtuoso-t ("killall virtuoso-t"), because it was running for over an hour at >80% CPU (despite the fact that no files were being indexed)
- approximately 5 sec after the kill command, the process stopped running (as observed by "top")
- immediately after this, the KDE Activity Manager crashed and DrKonqi opened. The backtrace is attached

-- Backtrace (Reduced):
#6  0x00007f0a3e4a4606 in lockInline (this=0x936b68) at /usr/include/QtCore/qmutex.h:187
#7  QMutexLocker (m=0x936b68, this=<synthetic pointer>) at /usr/include/QtCore/qmutex.h:109
#8  Soprano::Client::SocketHandler::~SocketHandler (this=0x9f3930, __in_chrg=<optimized out>) at /usr/src/debug/soprano-2.7.6/client/clientconnection.cpp:58
#9  0x00007f0a3e4a4739 in Soprano::Client::SocketHandler::~SocketHandler (this=0x9f3930, __in_chrg=<optimized out>) at /usr/src/debug/soprano-2.7.6/client/clientconnection.cpp:61
#10 0x00007f0a429e1bd0 in QThreadStorageData::set (this=<optimized out>, p=0x9ccea0) at thread/qthreadstorage.cpp:165
Comment 117 Pascal Maillard 2012-11-11 16:48:48 UTC
I want to add to my previous comment that I deleted my Nepomuk database a week ago; since then, all files and e-mails seem to have been indexed, and there have not been any KDE updates.