Bug 226432 - dolphin crash when dragging files into a folder from ark
Summary: dolphin crash when dragging files into a folder from ark
Status: RESOLVED DUPLICATE of bug 220877
Alias: None
Product: dolphin
Classification: Applications
Component: general
Version: 16.12.2
Platform: Compiled Sources Linux
Importance: NOR crash
Target Milestone: ---
Assignee: Peter Penz
Depends on:
Reported: 2010-02-12 01:23 UTC by greg.freemyer
Modified: 2010-02-12 14:02 UTC
1 user

See Also:
Latest Commit:
Version Fixed In:


Description greg.freemyer 2010-02-12 01:23:14 UTC
Application: dolphin (1.4)
KDE Platform Version: 4.3.98 (KDE 4.3.98 (KDE 4.4 RC3)) "release 218" (Compiled from sources)
Qt Version: 4.6.1
Operating System: Linux x86_64
Distribution: "openSUSE 11.2 (x86_64)"

-- Information about the crash:
I had dolphin open to a folder on a samba share.

I had attempted to auto-extract a zip file directly in Dolphin, to create a subdirectory and populate it with the 7 files in the zip. For some reason it extracted 3 of them and then started asking whether I wanted to overwrite an already-existing file.

I tried a few times to answer yes, then opened the zip file in Ark to drag and drop the files manually. I hit the same basic problem, although a fourth file did make it into the directory.

So I created a second extraction folder and tried to drag files from Ark into it. Three or four files into this, Dolphin crashed.

-- Backtrace:
Application: Dolphin (dolphin), signal: Segmentation fault
[Current thread is 1 (Thread 0x7f0b949d17f0 (LWP 32466))]

Thread 3 (Thread 0x7f0b814ee910 (LWP 439)):
#0  0x00007f0b8dceb2cd in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f0b80815621 in metronom_sync_loop () from /usr/lib64/libxine.so.1
#2  0x00007f0b8dce665d in start_thread () from /lib64/libpthread.so.0
#3  0x00007f0b9070614d in clone () from /lib64/libc.so.6
#4  0x0000000000000000 in ?? ()

Thread 2 (Thread 0x7f0b83c86910 (LWP 6273)):
#0  0x00007f0b8dceb049 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f0b919a8e73 in QMutexPrivate::wait (this=0x75a0f0, timeout=-1) at thread/qmutex_unix.cpp:84
#2  0x00007f0b919a49e5 in QMutex::lock (this=0x753690) at thread/qmutex.cpp:205
#3  0x00007f0b91aaf998 in relock (this=<value optimized out>) at ../../src/corelib/thread/qmutex.h:120
#4  QMutexLocker (this=<value optimized out>) at ../../src/corelib/thread/qmutex.h:102
#5  QMetaObject::activate (this=<value optimized out>) at kernel/qobject.cpp:3214
#6  0x00007f0b919a92fd in QThreadPrivate::finish (arg=<value optimized out>) at thread/qthread_unix.cpp:278
#7  0x00007f0b919a974d in ~__pthread_cleanup_class (this=<value optimized out>, __in_chrg=<value optimized out>) at /usr/include/pthread.h:535
#8  QThreadPrivate::start (this=<value optimized out>, __in_chrg=<value optimized out>) at thread/qthread_unix.cpp:253
#9  0x00007f0b8dce665d in start_thread () from /lib64/libpthread.so.0
#10 0x00007f0b9070614d in clone () from /lib64/libc.so.6
#11 0x0000000000000000 in ?? ()

Thread 1 (Thread 0x7f0b949d17f0 (LWP 32466)):
[KCrash Handler]
#5  0x00007f0b91aab2ac in QObjectPrivate::resetCurrentSender (receiver=0x7f0b7403a750, currentSender=0x7fff569e2240, previousSender=0x0) at kernel/qobject.cpp:407
#6  0x00007f0b91aafc47 in QMetaObject::activate (sender=0xcbc350, m=<value optimized out>, local_signal_index=<value optimized out>, argv=<value optimized out>) at kernel/qobject.cpp:3294
#7  0x00007f0b91aaffaf in QObject::destroyed (this=0x7f0b7403a750, _t1=0xcbc350) at .moc/release-shared/moc_qobject.cpp:149
#8  0x00007f0b91ab25d5 in QObject::~QObject (this=<value optimized out>, __in_chrg=<value optimized out>) at kernel/qobject.cpp:869
#9  0x00007f0b919a6fa9 in QThread::~QThread (this=0xcbc350, __in_chrg=<value optimized out>) at thread/qthread.cpp:411
#10 0x0000000000464790 in _start ()

Possible duplicates by query: bug 225931, bug 224656, bug 220879, bug 220877.

Reported using DrKonqi
Comment 1 Dario Andres 2010-02-12 14:02:21 UTC
Merging with bug 220877. Thanks

*** This bug has been marked as a duplicate of bug 220877 ***