Bug 251795 - Nepomuk related crash on startup (4.6) [QMutex::lock, QMutexLocker, Soprano::Client::SocketHandler::~SocketHandler, ..., Nepomuk::MainModel::init]
Summary: Nepomuk related crash on startup (4.6) [QMutex::lock, QMutexLocker, Soprano::...
Status: RESOLVED DUPLICATE of bug 286627
Alias: None
Product: nepomuk
Classification: Miscellaneous
Component: libnepomukcore
Version: 4.6
Platform: Mandriva RPMs Linux
Importance: NOR crash
Target Milestone: ---
Assignee: Sebastian Trueg
URL:
Keywords:
Duplicates: 237120 258365 259453 259454 259455 261465 263203 263682 264504 268367 275389 277138 277139 277503 278546 278945 279397 280011 281492 282595 283868 284044 285181 295063
Depends on:
Blocks:
 
Reported: 2010-09-20 09:40 UTC by Olivier LAHAYE
Modified: 2012-03-08 21:45 UTC
CC List: 27 users

See Also:
Latest Commit:
Version Fixed In:


Attachments

Description Olivier LAHAYE 2010-09-20 09:40:35 UTC
Application: akonadi_nepomuk_contact_feeder (0.1)
KDE Platform Version: 4.5.68 (4.6 >= 20100912)
Qt Version: 4.7.0
Operating System: Linux 2.6.35.4-desktop-1mnb x86_64
Distribution: "Mandriva Linux 2010.1"

-- Information about the crash:
Nepomuk crashed shortly after logging in, maybe just after session restoration.

The crash can be reproduced every time.

-- Backtrace:
Application: Akonadi Agent (akonadi_nepomuk_contact_feeder), signal: Segmentation fault
[KCrash Handler]
#6  0x00007f437eb9111c in QMutex::lock (this=0x1fa7628) at thread/qmutex.cpp:151
#7  0x00007f4378c611c1 in QMutexLocker (this=0x2069250, __in_chrg=<value optimized out>) at /usr/lib/qt4/include/QtCore/qmutex.h:102
#8  Soprano::Client::SocketHandler::~SocketHandler (this=0x2069250, __in_chrg=<value optimized out>) at /usr/src/debug/soprano-2.5.62/client/clientconnection.cpp:58
#9  0x00007f4378c61289 in Soprano::Client::SocketHandler::~SocketHandler (this=0x2069250, __in_chrg=<value optimized out>) at /usr/src/debug/soprano-2.5.62/client/clientconnection.cpp:61
#10 0x00007f437eb93ce6 in QThreadStorageData::set (this=<value optimized out>, p=0x20b4780) at thread/qthreadstorage.cpp:148
#11 0x00007f4378c5ec12 in qThreadStorage_setLocalData<Soprano::Client::SocketHandler> (this=0x205f480) at /usr/lib/qt4/include/QtCore/qthreadstorage.h:92
#12 setLocalData (this=0x205f480) at /usr/lib/qt4/include/QtCore/qthreadstorage.h:148
#13 Soprano::Client::ClientConnection::socketForCurrentThread (this=0x205f480) at /usr/src/debug/soprano-2.5.62/client/clientconnection.cpp:95
#14 0x00007f4378c5ec59 in Soprano::Client::ClientConnection::connectInCurrentThread (this=<value optimized out>) at /usr/src/debug/soprano-2.5.62/client/clientconnection.cpp:754
#15 0x00007f4378c5e08f in Soprano::Client::LocalSocketClient::connect (this=0x205e018, name=...) at /usr/src/debug/soprano-2.5.62/client/localsocketclient.cpp:141
#16 0x00007f437e646dfb in (anonymous namespace)::GlobalModelContainer::init (this=0x205dff0, forced=<value optimized out>)
    at /usr/src/debug/kdelibs-4.5.68svn1174542/nepomuk/core/nepomukmainmodel.cpp:102
#17 0x00007f437e647549 in Nepomuk::MainModel::init (this=0x205c070) at /usr/src/debug/kdelibs-4.5.68svn1174542/nepomuk/core/nepomukmainmodel.cpp:176
#18 0x00007f437e6409e4 in Nepomuk::ResourceManager::init (this=0x2061510) at /usr/src/debug/kdelibs-4.5.68svn1174542/nepomuk/core/resourcemanager.cpp:329
#19 0x00007f437e641fee in Nepomuk::ResourceManagerPrivate::_k_storageServiceInitialized (this=0x205cf10, success=<value optimized out>)
    at /usr/src/debug/kdelibs-4.5.68svn1174542/nepomuk/core/resourcemanager.cpp:220
#20 0x00007f437e642385 in Nepomuk::ResourceManager::qt_metacall (this=0x2061510, _c=QMetaObject::InvokeMetaMethod, _id=<value optimized out>, _a=0x7fffdab9f600)
    at /usr/src/debug/kdelibs-4.5.68svn1174542/build/nepomuk/resourcemanager.moc:90
#21 0x00007f437e8c92b8 in QDBusConnectionPrivate::deliverCall (this=0x2020300, object=0x2061510, msg=..., metaTypes=..., slotIdx=9) at qdbusintegrator.cpp:916
#22 0x00007f437e8d455f in QDBusCallDeliveryEvent::placeMetaCall (this=<value optimized out>, object=<value optimized out>) at qdbusintegrator_p.h:103
#23 0x00007f437ec92aba in QObject::event (this=0x2061510, e=<value optimized out>) at kernel/qobject.cpp:1211
#24 0x00007f437c9926e4 in QApplicationPrivate::notify_helper (this=0x1f57e30, receiver=0x2061510, e=0x208dd10) at kernel/qapplication.cpp:4396
#25 0x00007f437c99715a in QApplication::notify (this=<value optimized out>, receiver=0x2061510, e=0x208dd10) at kernel/qapplication.cpp:4277
#26 0x00007f437facd766 in KApplication::notify (this=0x7fffdaba00e0, receiver=0x2061510, event=0x208dd10) at /usr/src/debug/kdelibs-4.5.68svn1174542/kdeui/kernel/kapplication.cpp:310
#27 0x00007f437ec7e73c in QCoreApplication::notifyInternal (this=0x7fffdaba00e0, receiver=0x2061510, event=0x208dd10) at kernel/qcoreapplication.cpp:732
#28 0x00007f437ec81ee5 in sendEvent (receiver=0x0, event_type=0, data=0x1f405e0) at kernel/qcoreapplication.h:215
#29 QCoreApplicationPrivate::sendPostedEvents (receiver=0x0, event_type=0, data=0x1f405e0) at kernel/qcoreapplication.cpp:1373
#30 0x00007f437eca96c3 in sendPostedEvents (s=0x1f5b900) at kernel/qcoreapplication.h:220
#31 postEventSourceDispatch (s=0x1f5b900) at kernel/qeventdispatcher_glib.cpp:277
#32 0x00007f4378ede193 in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#33 0x00007f4378ede970 in ?? () from /usr/lib64/libglib-2.0.so.0
#34 0x00007f4378edec0d in g_main_context_iteration () from /usr/lib64/libglib-2.0.so.0
#35 0x00007f437eca985f in QEventDispatcherGlib::processEvents (this=0x1f3fcd0, flags=<value optimized out>) at kernel/qeventdispatcher_glib.cpp:415
#36 0x00007f437ca3674e in QGuiEventDispatcherGlib::processEvents (this=<value optimized out>, flags=<value optimized out>) at kernel/qguieventdispatcher_glib.cpp:204
#37 0x00007f437ec7dad2 in QEventLoop::processEvents (this=<value optimized out>, flags=...) at kernel/qeventloop.cpp:149
#38 0x00007f437ec7dd1c in QEventLoop::exec (this=0x7fffdaba0040, flags=...) at kernel/qeventloop.cpp:201
#39 0x00007f437ec8219b in QCoreApplication::exec () at kernel/qcoreapplication.cpp:1009
#40 0x00007f437f03dd3e in Akonadi::AgentBase::init (r=0x20652a0) at /usr/src/debug/kdepimlibs-4.5.68svn1174542/akonadi/agentbase.cpp:512
#41 0x000000000040cb98 in int Akonadi::AgentBase::init<Akonadi::NepomukContactFeeder>(int, char**) ()
#42 0x00007f437d6aeafd in __libc_start_main () from /lib64/libc.so.6
#43 0x00000000004097f9 in _start ()

Reported using DrKonqi
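
The top of the backtrace (frames #6-#13) shows the crash happening while QThreadStorage replaces a previously stored Soprano::Client::SocketHandler: setLocalData() deletes the old handler, whose destructor takes a QMutexLocker on a mutex it reaches through a back-pointer, and QMutex::lock() then faults on what looks like freed memory. The following is a minimal, purely illustrative C++ sketch of that pattern; the class names (Owner, Handler) are hypothetical stand-ins, not the actual Soprano code, and the dangling back-pointer is an assumption read from the frames, not a confirmed diagnosis.

#include <QMutex>
#include <QMutexLocker>
#include <QThreadStorage>

class Owner;                     // stand-in for the connection object that owns the mutex

class Handler {                  // stand-in for a per-thread socket handler
public:
    explicit Handler(Owner* o) : m_owner(o) {}
    ~Handler();                  // locks a mutex that lives inside *m_owner
private:
    Owner* m_owner;              // raw back-pointer; nothing keeps the owner alive
};

class Owner {
public:
    QMutex mutex;
    QThreadStorage<Handler*> perThreadHandler;

    void connectInCurrentThread()
    {
        // setLocalData() first deletes the Handler previously stored for this
        // thread. If that old Handler still points at an Owner that has since
        // been destroyed, ~Handler() below locks freed memory -- the
        // QMutex::lock() segfault seen in frame #6.
        perThreadHandler.setLocalData(new Handler(this));
    }
};

Handler::~Handler()
{
    QMutexLocker lock(&m_owner->mutex);   // crashes if m_owner is dangling
    // ... would normally unregister this thread's socket from the owner ...
}

Whether this is exactly what happens inside Soprano's clientconnection.cpp is a guess from the frames; it is only meant to make the sequence QThreadStorageData::set -> ~SocketHandler -> QMutexLocker -> QMutex::lock easier to follow.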
Comment 1 Dario Andres 2010-12-10 19:17:00 UTC
[Comment from a bug triager]
Bug 259454 is from "akonadi_nepomuk_email_feeder"
Bug 259455 also has a related backtrace, but it is not about an Akonadi app; it is about the Plasma activities manager daemon
Regards
Comment 2 Dario Andres 2010-12-10 19:17:05 UTC
*** Bug 259453 has been marked as a duplicate of this bug. ***
Comment 3 Dario Andres 2010-12-10 19:17:07 UTC
*** Bug 259454 has been marked as a duplicate of this bug. ***
Comment 4 Aaron J. Seigo 2011-01-19 22:38:16 UTC
*** Bug 263682 has been marked as a duplicate of this bug. ***
Comment 5 Aaron J. Seigo 2011-01-19 22:38:47 UTC
*** Bug 259455 has been marked as a duplicate of this bug. ***
Comment 6 Aaron J. Seigo 2011-01-19 22:38:58 UTC
*** Bug 263203 has been marked as a duplicate of this bug. ***
Comment 7 Aaron J. Seigo 2011-01-19 22:40:42 UTC
We're seeing this crash in multiple Nepomuk-using apps now, in some cases with the same user reporting the same crash in each of these apps. Re-assigning to Nepomuk.
Comment 8 Christophe Marin 2011-01-27 22:03:47 UTC
*** Bug 264504 has been marked as a duplicate of this bug. ***
Comment 9 Dario Andres 2011-03-13 14:02:38 UTC
[Comment from a bug triager]
From bug 268367 (KDE SC 4.6.1):
- What I was doing when the application crashed:
I just logged in to KDE 4.6.1, KMail started (I have it set to autostart), and
then the Activity Manager crashed; at the same time three windows of DrKonqi appeared,
saying that an Akonadi Agent had crashed as well.
Comment 10 Dario Andres 2011-03-13 14:02:43 UTC
*** Bug 268367 has been marked as a duplicate of this bug. ***
Comment 11 Beat Wolf 2011-06-11 22:30:36 UTC
*** Bug 275389 has been marked as a duplicate of this bug. ***
Comment 12 Christophe Marin 2011-07-05 20:39:10 UTC
*** Bug 277139 has been marked as a duplicate of this bug. ***
Comment 13 Christophe Marin 2011-07-05 20:39:37 UTC
from bug 277139:

-- Information about the crash:
- What I was doing when the application crashed: I had Kontact open and
disabled Nepomuk because I had to test some things, i.e. virtuoso-t was
constantly using one core and stuck in a loop, i.e. it did not quit even minutes
after Nepomuk was already disabled.

I got some notifications about Nepomuk not being available anymore. When I
finished analysing the virtuoso-t process I had to kill it and restart Nepomuk.
After this was done I tried to clear the filter in Kontact by clicking on the
"x" within the input line. Kontact was busy (I guess coping with Nepomuk's
restart) and that's when the crash happened. An Akonadi agent (the Nepomuk email
feeder, bug 277138) crashed and Kontact crashed.
Comment 14 Aaron J. Seigo 2011-07-18 13:02:55 UTC
*** Bug 277503 has been marked as a duplicate of this bug. ***
Comment 15 Dario Andres 2011-08-07 13:28:29 UTC
[Comment from a bug triager]
From bug 278945 (KDE SC 4.7.0):
- What I was doing when the application crashed:
I started dolphin with nepomuk/strigi disabled. Then I started those services
which apparently made dolphin crash.

From bug 279397 (kactivitymanagerd crashed, KDE SC 4.7.0)
- What I was doing when the application crashed:
logging into a new session of KDE after making some software updates
involving Zypper
Comment 16 Dario Andres 2011-08-07 13:28:34 UTC
*** Bug 278945 has been marked as a duplicate of this bug. ***
Comment 17 Dario Andres 2011-08-07 13:28:37 UTC
*** Bug 279397 has been marked as a duplicate of this bug. ***
Comment 18 Christophe Marin 2011-08-28 16:24:03 UTC
*** Bug 280011 has been marked as a duplicate of this bug. ***
Comment 19 Sebastian Trueg 2011-09-21 17:37:03 UTC
*** Bug 281492 has been marked as a duplicate of this bug. ***
Comment 20 Sebastian Trueg 2011-09-21 17:37:47 UTC
Can this be reproduced in KDE 4.6.x or 4.7?
Comment 21 Cyrille Dunant 2011-09-21 18:11:29 UTC
On Wednesday 21 Sep 2011 17:37:47 Sebastian Trueg wrote:
> Can this be reproduced in KDE 4.6.x or 4.7?

I have not seen this bug on 4.7.1
Comment 22 Sebastian Trueg 2011-09-22 08:38:59 UTC
(In reply to comment #21)
> I have not seen this bug on 4.7.1

Did you experience it before?
Comment 23 Christophe Marin 2011-09-22 09:19:48 UTC
I have a coredump from yesterday (despite the appearances, that's kdelibs 4.7):

Program terminated with signal 11, Segmentation fault.
#0  0x00007fd4b74fa5dc in QMutex::lock (this=0x6595d8) at thread/qmutex.cpp:151
151         if (d->recursive) {
(gdb) thread apply all bt

Thread 2 (Thread 0x7fd4a1600700 (LWP 18910)):
#0  0x00007fd4b536a843 in select () at ../sysdeps/unix/syscall-template.S:82
#1  0x00007fd4b75c7301 in QProcessManager::run (this=0x7fd4b7915f80) at io/qprocess_unix.cpp:245
#2  0x00007fd4b74ff015 in QThreadPrivate::start (arg=0x7fd4b7915f80) at thread/qthread_unix.cpp:331
#3  0x00007fd4b726eeb5 in start_thread (arg=0x7fd4a1600700) at pthread_create.c:301
#4  0x00007fd4b53711ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 1 (Thread 0x7fd4a6be2760 (LWP 18790)):
#0  0x00007fd4b74fa5dc in QMutex::lock (this=0x6595d8) at thread/qmutex.cpp:151
#1  0x00007fd4b8768374 in QMutexLocker (m=0x6595d8, this=<synthetic pointer>) at /usr/include/QtCore/qmutex.h:102
#2  Soprano::Client::SocketHandler::~SocketHandler (this=0x65a8b0, __in_chrg=<optimized out>) at /usr/src/debug/soprano-2.7.50_20110921/client/clientconnection.cpp:58
#3  0x00007fd4b8768479 in Soprano::Client::SocketHandler::~SocketHandler (this=0x65a8b0, __in_chrg=<optimized out>) at /usr/src/debug/soprano-2.7.50_20110921/client/clientconnection.cpp:61
#4  0x00007fd4b74fd457 in QThreadStorageData::set (this=0x6cfd80, p=0x65aa10) at thread/qthreadstorage.cpp:165
#5  0x00007fd4b8765ed0 in qThreadStorage_setLocalData<Soprano::Client::SocketHandler> (d=<optimized out>, t=<optimized out>) at /usr/include/QtCore/qthreadstorage.h:92
#6  setLocalData (t=0x65aa10, this=<optimized out>) at /usr/include/QtCore/qthreadstorage.h:148
#7  Soprano::Client::ClientConnection::socketForCurrentThread (this=0x767ec0) at /usr/src/debug/soprano-2.7.50_20110921/client/clientconnection.cpp:95
#8  0x00007fd4b8765f49 in Soprano::Client::ClientConnection::connectInCurrentThread (this=<optimized out>) at /usr/src/debug/soprano-2.7.50_20110921/client/clientconnection.cpp:754
#9  0x00007fd4b876559a in Soprano::Client::LocalSocketClient::connect (this=0x656cc8, name="/tmp/ksocket-krop/nepomuk-socket") at /usr/src/debug/soprano-2.7.50_20110921/client/localsocketclient.cpp:141
#10 0x00007fd4b6fe80a9 in init (forced=true, this=0x656ca0) at /usr/src/debug/kdelibs-4.7.42_20110921/nepomuk/core/nepomukmainmodel.cpp:102
#11 Nepomuk::MainModel::init (this=0x656be0) at /usr/src/debug/kdelibs-4.7.42_20110921/nepomuk/core/nepomukmainmodel.cpp:176
#12 0x00007fd4b6fe1397 in Nepomuk::ResourceManager::init (this=0x654710) at /usr/src/debug/kdelibs-4.7.42_20110921/nepomuk/core/resourcemanager.cpp:331
#13 0x00007fd4b6fe4445 in Nepomuk::ResourceManagerPrivate::_k_storageServiceInitialized (this=0x6540a0, success=<optimized out>)
    at /usr/src/debug/kdelibs-4.7.42_20110921/nepomuk/core/resourcemanager.cpp:221
#14 0x00007fd4b6fe4545 in Nepomuk::ResourceManager::qt_metacall (this=0x654710, _c=QMetaObject::InvokeMetaMethod, _id=<optimized out>, _a=0x7fffbfb68ce0)
    at /usr/src/debug/kdelibs-4.7.42_20110921/build/nepomuk/resourcemanager.moc:90
#15 0x00007fd4b26de9bb in QDBusConnectionPrivate::deliverCall (this=0x4fdd70, object=0x654710, msg=..., metaTypes=QList<int> = {...}, slotIdx=9) at qdbusintegrator.cpp:942
#16 0x00007fd4b26e7cdf in QDBusCallDeliveryEvent::placeMetaCall (this=<optimized out>, object=<optimized out>) at qdbusintegrator_p.h:103
#17 0x00007fd4b75fbfaa in QObject::event (this=0x654710, e=<optimized out>) at kernel/qobject.cpp:1226
#18 0x00007fd4b5d16be4 in notify_helper (e=0x720340, receiver=0x654710, this=0x48cdb0) at kernel/qapplication.cpp:4481
#19 QApplicationPrivate::notify_helper (this=0x48cdb0, receiver=0x654710, e=0x720340) at kernel/qapplication.cpp:4453
#20 0x00007fd4b5d1ba71 in QApplication::notify (this=0x7fffbfb697b0, receiver=0x654710, e=0x720340) at kernel/qapplication.cpp:4360
Comment 24 Sebastian Trueg 2011-09-22 09:36:08 UTC
(In reply to comment #23)
> I have a coredump from yesterday (despite the appearances, that's kdelibs 4.7):

Is it 4.7.1 or 4.7.0, that is the question.
Comment 25 Christophe Marin 2011-09-22 10:09:06 UTC
git snapshots from the 4.7 branch
Comment 26 Rohan Garg 2011-09-22 13:34:34 UTC
I can (sort of) reproduce this bug with Project Neon (which is currently using kdelibs master, and pretty much everything else from master as well). Currently, as soon as I start up my session, nepomukindexer starts hogging all my CPUs (note this is an 8-core machine) and then I have to kill the indexer manually via a SIGTERM to bring everything back to normal, giving me a backtrace similar to the one attached (I reported it as a bug which was marked as a duplicate of this one).
Comment 27 Sebastian Trueg 2011-09-22 13:45:48 UTC
(In reply to comment #26)
> I can (sort of)reproduce this bug with Project Neon ( which is currently using
> kdelibs master , and pretty much everything else from master as well ).
> Currently as soon as i start up my session , nepomukindexer starts hogging all
> my CPU's ( note this is a 8 core machine ) and then i have to kill the indexer
> manually via a SIGTERM to bring everything back to normal, giving me a
> backtrace similar to the one attached ( i reported it as a bug which was marked
> as a duplicate of this one ).

not related to this bug: can you provide the file that makes nepomukindexer go wild?
Comment 28 Cyrille Dunant 2011-09-22 13:50:29 UTC
On Thursday 22 Sep 2011 13:45:48 Sebastian Trueg wrote:
> not related to this bug: can you provide the file that makes nepomukindexer
> go wild?

I noticed (using the openSUSE snapshots of -- I presume -- master) that what
happens is this: some files are apparently slow to index (I must still have
some of them on my disk; mostly PDFs generated with Inkscape: no text,
mostly drawings). Then, when the indexer gets stuck, another one is launched. Now
if you have a bunch of these files, you may end up with 20 indexers running.
Eventually, if you wait long enough, they'll complete their tasks and things
will go back to normal. But in the meantime, the system gets hammered.

This may or may not be related.
Comment 29 Christophe Marin 2011-09-23 09:08:58 UTC
*** Bug 282595 has been marked as a duplicate of this bug. ***
Comment 30 Christophe Marin 2011-09-23 09:09:45 UTC
*** Bug 277138 has been marked as a duplicate of this bug. ***
Comment 31 Sebastian Trueg 2011-09-23 09:13:56 UTC
(In reply to comment #28)
> I noticed (using the openSuse snapshots of -- I presume -- master) that what 
> happens is this: some files are apparently slow to index (I must still have 
> some of them on my disk, but mostly PDFs generated with inkscape: no text, 
> mostly drawings) Then, when the indexer gets stuck, it launches another. Now 
> if you have a bunch of these files, you may end up with 20 indexers running. 
> Eventually, if you wait long enough, they'll complete their tasks and things 
> will go back to normal. But in the meantime, the system gets hammered.
> 
> This may or may not be related.

This should already have been fixed here: https://bugs.kde.org/show_bug.cgi?id=281779. Still, it would be very helpful to get such a PDF in order to improve the indexing speed.
Comment 32 Rohan Garg 2011-09-25 14:44:59 UTC
Hi Sebastian
Looks like it was trying to index a 700 MB Kubuntu Oneiric amd64 ISO in my home folder, which made nepomukindexer go crazy; I hope that helps. Let me know if you need any other info.
Comment 33 Sebastian Trueg 2011-09-29 14:04:35 UTC
*** Bug 258365 has been marked as a duplicate of this bug. ***
Comment 34 Sebastian Trueg 2011-09-30 15:40:44 UTC
*** Bug 237120 has been marked as a duplicate of this bug. ***
Comment 35 Sebastian Trueg 2011-09-30 15:42:19 UTC
*** Bug 278546 has been marked as a duplicate of this bug. ***
Comment 36 Sebastian Trueg 2011-09-30 15:42:36 UTC
*** Bug 261465 has been marked as a duplicate of this bug. ***
Comment 37 Christophe Marin 2011-10-12 17:43:05 UTC
*** Bug 283868 has been marked as a duplicate of this bug. ***
Comment 38 Beat Wolf 2011-10-27 06:57:15 UTC
*** Bug 284044 has been marked as a duplicate of this bug. ***
Comment 39 Christophe Marin 2011-10-29 12:31:52 UTC
*** Bug 285181 has been marked as a duplicate of this bug. ***
Comment 40 Sebastian Trueg 2011-11-18 08:22:42 UTC
Resolving as a duplicate of the newer bug instead of the other way around since I attached a patch to the new one.

*** This bug has been marked as a duplicate of bug 286627 ***
Comment 41 Christophe Marin 2012-03-08 21:45:47 UTC
*** Bug 295063 has been marked as a duplicate of this bug. ***