Summary: | Navigating mounted network locations is extremely slow in Dolphin compared to command line | |
---|---|---|---|
Product: | [Frameworks and Libraries] frameworks-kio | Reporter: | doc.evans |
Component: | general | Assignee: | KIO Bugs <kio-bugs-null> |
Status: | CONFIRMED | |
Severity: | normal | CC: | a.samirh78, aextefyhzyohfubdhk, albertvaka, alvaro.aguilera, aspotashev, boot.efi, dashonwwIII, elvis.angelaccio, empire, endrebjorsvik, germano.massullo, glizda, grabner, james, johu, joseph, julakali, kdelibs-bugs, kokoko3k, lars.winterfeld, lars_kdebugs, lorefice2, m.lincetto, mads, mail, malcolm, martin, maxime.chassagneux, michael.berlin.xtreemfs, michiel, miranda, nate, open-development, peter.penz19, pier_andreit, pprkut, rue, sitter, softexpert, someuniquename, spamsuxx, stefbon, sushyad, t.hirsch, t.m.guymer, thomas.lassdiesonnerein, torpedr, unblended_icing552, voidpointertonull+bugskdeorg, yvan |
Priority: | NOR | Keywords: | efficiency-and-performance, usability |
Version: | 6.7.0 | ||
Target Milestone: | --- | ||
Platform: | unspecified | ||
OS: | Linux | ||
See Also: |
https://bugs.kde.org/show_bug.cgi?id=423481 https://bugs.kde.org/show_bug.cgi?id=423487 https://bugs.kde.org/show_bug.cgi?id=423492 https://bugs.kde.org/show_bug.cgi?id=423499 https://bugs.kde.org/show_bug.cgi?id=423500 https://bugs.kde.org/show_bug.cgi?id=423501 https://bugs.kde.org/show_bug.cgi?id=423502 https://bugs.kde.org/show_bug.cgi?id=423532 https://bugs.kde.org/show_bug.cgi?id=423187 |
Latest Commit: | http://commits.kde.org/kdelibs/6369b556ae9beef6888699d23b91326bac950ba4 | Version Fixed In: | |
Sentry Crash Report: | |||
Bug Depends on: | |||
Bug Blocks: | 290666, 290680, 291138 | ||
Attachments: |
fs requests of krusader reading /
fs requests of mc reading /
fs requests of ls reading /
script to generate a large content tree
gwenview multiple files "move to"
dolphin gdb_1
dolphin gdb_2
dolphin gdb_3
Description
doc.evans
2008-12-24 16:26:36 UTC
I truly hate to second-guess developers, but are you absolutely certain that this is a kio problem? It's not at all obvious to me why it would be. (The answer "Yes" will satisfy me that you have thought about it and concluded that indeed it is; I just want to be sure that you aren't confusing this with something to do with the fish kio_slave.)

I can confirm this bug. It's still present in KDE 4.4.1. Whenever I navigate a directory up or down, Dolphin says "Connecting" and "Initiating protocol" in the status bar. So the ssh session is obviously not being kept open, but initiated from scratch for every directory change. That's what makes navigating so slow. Please implement sessions for ssh (fish) in KIO or Dolphin and Konqueror!

4.4.2 here. While I can confirm the slowness on sshfs filesystems (as well as cifs/smbfs), I don't think this is related to KIO. It's just that Dolphin seems to fetch a lot of info about files/directories... synchronously, blocking the GUI. It seems a little better if I don't display some columns (like permissions, owner and so on). So, if the underlying filesystem is slow, Dolphin will hang here and there. Konqueror from 3.5 is far snappier.

I don't see how it can be an underlying filesystem problem. Responsiveness is great from the command line, so it seems to me that it has to be a KDE issue (whether it's a KIO issue or not I'm less sure, but I'm as certain as I can be that it's a KDE problem).

Bug confirmed in KDE 4.3.3. Filesystem bugs can be excluded because:
- the bug is present even when browsing smbfs/cifs mounted shares
- Konqueror from KDE 3.4 is much more responsive
Please vote for this bug.

I've made some benchmarks comparing Dolphin, Konqueror, Krusader, PCManFM and Thunar: each has to show a remotely mounted sshfs directory with 1997 items (all folders) and scroll; all on a 10 Mbps half-duplex link.
* Terminal (ls -la): 1.313 sec
* Dolphin (KDE 4.3.3): about 10 seconds (but scrolling is almost impossible as Dolphin seems to be frozen; it is even hard to close it)
* Konqueror (KDE 4.3.3): about 13 seconds (same issues with scrolling/freezing/closing)
* Krusader (2.0.0 for KDE 4) with mimetype magic disabled: about 6 seconds (scrolling is still a pain, but faster than Konqueror and Dolphin for KDE 4)
* Krusader (1.9.0 for KDE 3): about 2 seconds (scrolling is very fast)
* PCManFM: about 1.5 seconds (no issues in scrolling/closing)
* Thunar: about 1.5 seconds (no issues in scrolling/closing)
* ROX: about 1.5 seconds (no issues in scrolling/closing)

Hoping this will help to clarify that the speed issues are *not (only)* related to the filesystem, so I think this should be addressed as a KIO problem with slow filesystems.

*** This bug has been confirmed by popular vote. ***

When running tcpdump you can see that many packets (> 100) are transferred just when resizing the window or hovering above a directory.

Dolphin is also very slow when copying files via ssh. I did a simple test and it took Dolphin 4 minutes and 20 seconds to copy what you can copy with scp in just 2 minutes and 30 seconds. Almost twice as long.

It seems that Dolphin/Krusader tries to read file/directory contents when I simply select the element. Directory listing is much slower than the one in mc. Unfortunately, this is not usable. :(

(In reply to comment #4)
> I don't see how it can be an underlying filesystem problem.
>
> Responsiveness is great from the command line, so it seems to me that it has to
> be a KDE issue (whether it's a kio issue or not I'm less sure, but I'm as
> certain as I can be that it's a KDE problem).

I'm seeing the same behavior in KDE 4.6.2. One additional data point, kdevelop4: at the shell, navigating the sshfs mount is fast/responsive; in Dolphin it is slow/sluggish, as reported here; but navigating the same tree in kdevelop4 is fast/responsive.
For reference, I'm running kdevelop4-4.2.0-46.7.x86_64 and dolphin-4.6.2-5.1.x86_64.

Created attachment 59213 [details]
fs requests of krusader reading /
FUSE debug mode shows all FS requests that krusader made
Created attachment 59214 [details]
fs requests of mc reading /
FUSE debug mode shows all FS requests that mc made
Created attachment 59215 [details]
fs requests of ls reading /
FUSE debug mode shows all FS requests that 'ls -la' command made
I used 'bindfs -d' to grab all requests when using different methods to list a directory. Results for 20 directories in '/' are:
* Krusader: 258 requests
* mc: 69 requests
* ls -la: 49 requests (but note that it tries to read xattrs)
* ls: 27 requests (seems to be the optimal number, because it only calls getattr for each element, which is an 'lstat' call per file, and that's sufficient to form a full-featured listing)

So Krusader makes far too many requests. Even if KDE programs need all those /dir/.directory requests, it would be optimal to have only about 50 requests...

I can confirm this on KDE 4.6.2, Dolphin 1.6.1. The remote directory is mounted using cifs with these options: directio,rsize=130048
* Command line copy from the remote CIFS server: ~30 MiB/s
* Dolphin copy from the remote CIFS server: ~10 MiB/s
* Konqueror copy from the remote CIFS server: ~10 MiB/s

I want to confirm this bug, using KDE 4.7.2 with openSUSE Tumbleweed 64-bit and NFSv4 transfers under Dolphin. Dolphin freezes for up to 30 seconds, then responds for 5 seconds, then freezes for 30 seconds, and so on...

I also want to confirm this bug, using KDE 4.7.2 with openSUSE Tumbleweed 64-bit. If I transfer large files (10 movies, 0.7 GB each) using the sftp:// protocol under Dolphin, the transfer starts, but if I then try to change directory, Dolphin freezes for minutes. In practice Dolphin is unusable in that sftp:// location while transferring; with Konqueror it doesn't happen. I would like to shed some light on this bug, which is very ugly.

Please, do not mix bugs. We're talking about mounted and (relatively) slow network filesystems. kio_sftp doesn't mount anything, nor does it try to read a lot of attributes from the files.

Obviously KIO thinks the file system is "local" while in fact it's remote. So it does operations that end up being too slow. I initially thought about previews and mimetype determination, but the first one can be turned off in the menus, and the second one is delayed anyway.
Slow copying is another issue, possibly fixed by the much more recent commit 2cd2d1a4cfa1 on Sep 21, 2011, bug 257907 and bug 258497. Let's only talk about listing and reloading here; the bug title says "navigating". [My testcase for future reference: sshfs www.davidfaure.fr: /mnt/df ; cd /mnt/df/txt.git/objects ]

Ah! With some breaking-in-gdb (poor-man's profiling), I found it. What's really slow is subdirectories, in detailed view mode, because it lists every subdir in order to display the number of items. And this it does NOT do in a delayed manner:

#0 0x00007f76af211b0a in __getdents64 () from /lib64/libc.so.6
#1 0x00007f76af2114d1 in readdir64 () from /lib64/libc.so.6
#2 0x00007f76b46d0dba in KDirModel::data (this=0xccfce0, index=..., role=743246400) at /d/kde/src/4/kdelibs/kio/kio/kdirmodel.cpp:739
#3 0x00007f76a3809f8f in DolphinModel::data (this=0xccfce0, index=..., role=743246400) at /d/kde/src/4/kde-baseapps/dolphin/src/views/dolphinmodel.cpp:119
#4 0x00007f76b179436d in QSortFilterProxyModel::data (this=0xcdcd50, index=..., role=743246400) at itemviews/qsortfilterproxymodel.cpp:1716
#5 0x00007f76b46f51cd in QModelIndex::data (this=0x7fff1e66f0a0, arole=743246400) at /d/qt/4/qt-for-trunk/include/QtCore/../../src/corelib/kernel/qabstractitemmodel.h:398
#6 0x00007f76b46ed564 in KFileItemDelegate::Private::itemSize (this=0xcaa6c0, index=..., item=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:226
#7 0x00007f76b46f11e3 in KFileItemDelegate::Private::display (this=0xcaa6c0, index=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:987
#8 0x00007f76b46f0640 in KFileItemDelegate::Private::initStyleOption (this=0xcaa6c0, option=0x7fff1e66e800, index=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:857
#9 0x00007f76b46f2027 in KFileItemDelegate::paint (this=0xcaa510, painter=0x7fff1e66f570, option=..., index=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:1252

Fixed (skipped for nfs/smb mounts), see commit in next comment.
Then there is the reading of the icon name (and comment) in "foo/.directory" (KFileItem::iconName calling KFolderMimeTypePrivate::iconName). Even though it's cached (so it happens only once per dir), it still makes things quite slow. => Fixed too (skipped for nfs/smb mounts).

The last issue seems to be when navigating away from a directory: KDirListerCache::forgetDirs calls manually_mounted for -each item-, which calls realFilePath, which stats... There's already a TODO for porting that code to Solid, assigned to afiestas ;-)

Git commit 02b7eb5d92daae4373e7d38d2d952a688bd42079 by David Faure.
Committed on 28/10/2011 at 19:12.
Pushed by dfaure into branch 'KDE/4.7'.

Detect network mounts and skip slow operations while listing them.

On NFS/SMB/SSHFS, KIO no longer counts items in directories (e.g. for Dolphin in detailed view mode), nor does it look at subdir/.directory files for custom icons or comments. This commit will improve speed greatly in kdelibs-4.7.4 and later. There's just one issue remaining, when navigating away from a directory with many items, but that's a TODO for afiestas -- if he agrees :)

CCBUG: 178678

M +16 -1 kdecore/io/kfilesystemtype_p.cpp
M +5 -3 kdecore/io/kfilesystemtype_p.h
M +2 -1 kio/kio/kdirmodel.cpp
M +43 -9 kio/kio/kfileitem.cpp
M +7 -0 kio/kio/kfileitem.h
A +55 -0 kio/kio/kfileitemaction_twomethods-reverted.diff

http://commits.kde.org/kdelibs/02b7eb5d92daae4373e7d38d2d952a688bd42079

Thank you for looking into and fixing this. I looked a bit at the code; wouldn't it be better to leave some room for other filesystems too? As far as I understood, non-"normal" filesystems seem to be hardcoded in kfilesystemtype_p; what if I use an exotic one? There's still curlftpfs, httpfs and god only knows how many FUSE implementations :) I'm not a C coder and I don't know how much trouble it could cause, but what about a .rc file filled with slow filesystems?
Sure, feel free to give me a list of "network mount" filesystems :-) A config file won't do, because there are two implementations of KFileSystemType: one which looks at the string returned by mount (the code where I added fuse.sshfs), and one (faster, and used on Linux) which looks at statfs.f_type "magic numbers" (see the code where I added SSHFS_SUPER_MAGIC). So to add detection of new filesystem types, I need "super-magic" numbers as well. I still want the fallback for unknown filesystems to be "local and normal", since that's the most feature-full case, and people seem to come up with new local filesystems all the time. curlftpfs and httpfs are based on FUSE like sshfs, right? I guess I could provide a small test prog that outputs the statfs.f_type value for a given directory, unless someone can find documentation about these values somewhere...

If possible, I suggest automatically detecting all filesystems based on FUSE. Also, it would be nice if the user could manually switch between "local" and "network" browsing modes for a location.

(In reply to comment #24)
> If possible, I suggest automatically detecting all filesystems based on fuse.

You can't generalize like that. There are local filesystems based on FUSE as well (like ntfs-3g). Treating them as remote filesystems would just be plain wrong.

@David Faure:
> Detect network mounts and skip slow operations while listing them.
Is 'skip' intended as 'never do it' or just 'delay it'?
Because in the second case the logic could be used for network and local filesystems too (?)
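The "small test prog that outputs the statfs.f_type value" mentioned a few comments up could be sketched like this. This is only an illustration, not the kdelibs code: it assumes Linux (where f_type is the first member of struct statfs, so reading the first word of an oversized buffer is enough), and the `statfs_magic` helper and the magic table are mine, with values taken from linux/magic.h (0x65735546 is "FUSE" in little-endian ASCII).

```python
import ctypes

# Known magic numbers from linux/magic.h; illustrative, not the kdelibs table.
KNOWN_MAGICS = {
    0x6969: "nfs",
    0x517B: "smb",
    0xFF534D42: "cifs",
    0x65735546: "fuse",
}

def statfs_magic(path):
    """Return statfs().f_type for path (Linux only).

    f_type is the first member of struct statfs, so we only need to
    read the first word of a buffer large enough to hold the struct.
    """
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    buf = ctypes.create_string_buffer(256)  # larger than struct statfs
    if libc.statfs(path.encode(), buf) != 0:
        raise OSError(ctypes.get_errno(), "statfs failed", path)
    return ctypes.cast(buf, ctypes.POINTER(ctypes.c_ulong)).contents.value

if __name__ == "__main__":
    import sys
    p = sys.argv[1] if len(sys.argv) > 1 else "/"
    magic = statfs_magic(p)
    print(f"{p}: f_type=0x{magic:X} ({KNOWN_MAGICS.get(magic, 'unknown')})")
```

Running it on an sshfs mount point should print the 0x65735546 value seen in the strace output quoted later in this thread; note that a plain "fuse" answer alone cannot distinguish a network FUSE filesystem from a local one like ntfs-3g.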
(In reply to comment #23)
> A config file can't do, because there are two implementation of
> KFileSystemType: one which looks at the string returned by mount (the code
> where I added fuse.sshfs), and one (faster, and used on linux) which looks at
> statfs.f_type "magic numbers"

Couldn't that file be in the format:

[network_filesystems_strings]
...
[network_filesystems_super_magic]
...
(EOF)

...so that the first entry that matches automatically identifies the fs type? I intend the use of that file just as a fallback where the hardcoded values are unable to identify the filesystem, so the performance impact should not be high, and the user will not need to wait for another bug report to be closed and another kdelibs to be released to add support for a newly created fs.

> This commit will improve speed greatly in kdelibs-4.7.4 and later.
> There's just one issue remaining, when navigating away from a directory
> with many items, but that's a TODO for afiestas -- if he agrees :)
I do agree, but what do I have to do?
Your words, my commands.
Tony: and how would you figure out the "super magic" for these file systems, when I couldn't even find it in the header where these are usually defined, so I had to reverse-engineer it? Anyway, KDE is already very configurable, but I don't think this needs a config file, i.e. delegating the problem to users. This should rather be done right in the code, so that it only needs to be solved once, not by every user. Whatever you would have put in that config file, tell me, and I'll put it in the code :-)

Alex: see `grep afiestas kdirlister.cpp`

Hi, I'm the developer of the FUSE client of the distributed file system XtreemFS (see www.xtreemfs.org for more information). Can you please also include our file system and recognize it as a "network file system"? Our client reports a file system type of the form "xtreemfs@<server url>", so checking for "xtreemfs@" at the beginning would be sufficient:

$ df -h /mnt
Filesystem                      Size Used Avail Use% Mounted on
xtreemfs@demo.xtreemfs.org/demo  20G 7.0G   13G  36% /mnt

Btw: I cannot tell you another super magic number than the one sshfs reports. See:

$ strace stat -f /mnt 2>&1|grep statfs|grep mnt
statfs("/mnt", {f_type=0x65735546, f_bsize=131072, f_blocks=161229, f_bfree=104645, f_bavail=104645, f_files=2048, f_ffree=2048, f_fsid={0, 0}, f_namelen=1024, f_frsize=131072}) = 0

From my file system in userspace I don't have the possibility to set the struct member f_type. Instead I've found this super magic number in the FUSE kernel code: http://lxr.free-electrons.com/source/fs/fuse/inode.c#L50 So I guess you'll have to rely on the file system type to differentiate between FUSE implementations. Best regards, Michael

Git commit 8b57d2b80329c9d005145354bdd5db8de3d6ede6 by David Faure.
Committed on 08/11/2011 at 11:20.
Pushed by dfaure into branch 'KDE/4.7'.

Sigh, with FUSE nothing is ever simple. Thanks to Michael Berlin for the information about that. Add xtreemfs.
CCBUG: 178678

M +7 -5 kdecore/io/kfilesystemtype_p.cpp

http://commits.kde.org/kdelibs/8b57d2b80329c9d005145354bdd5db8de3d6ede6

Looking at the code, it seems that curlftpfs is still missing, and from what I understood FUSE has just one magic number. So, here's how a filesystem using curlftpfs appears in my mounts:

Gozer ~ # mount|grep curlftpfs
curlftpfs#ftp://s1/ on /mnt/ftp type fuse (rw,nosuid,nodev,noatime)
curlftpfs#ftp://s2:212/T/ on /mnt/nasftp type fuse (rw,nosuid,nodev,noatime)

thanks!

http://commits.kde.org/kdelibs/02b7eb5d92daae4373e7d38d2d952a688bd42079 has a side effect on setups where the home directories of the users are mounted from an NFS server. When using the desktop icons activity, the icons on the desktop all show the gear symbol, because ~/Desktop is considered a "slow" folder.

I can confirm this side effect; there is also a forum thread on this issue started by another user (http://forum.kde.org/viewtopic.php?f=18&t=98122). Konqueror has separate file size limits for the preview of local and remote files. How difficult would it be to implement a similar configuration to select whether icons should be displayed for remote directories 1.) either globally, or 2.) on a per-directory basis? Before KDE 4.7.4, I never noticed any performance difference on the desktop between a local and a remote home directory (on a 100 Mbit ethernet network), so for me the proposed fix is actually a regression. Due to the vastly different requirements in different situations, I think it is very hard to automatically decide whether icons should be displayed, therefore a user choice seems preferable IMO. What do you think? Kind regards, Markus

BTW, the super-magic criterion doesn't cover the case of encfs being mounted over sshfs, because encfs is not a remote filesystem in any way. I suppose we need a different decision here. A global option to halt any excessive search would be just great! Unfortunately, I doubt it would ever be implemented...
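For filesystems without a usable magic number, the slower fallback discussed earlier checks the type string from the mount table. A rough Python sketch of that idea follows; the type list, the `curlftpfs#`/`xtreemfs@` source prefixes and the helper names are illustrative assumptions, not the actual kdelibs table:

```python
# Illustrative lists only; the real kdelibs tables differ.
NETWORK_FS_TYPES = {"nfs", "nfs4", "cifs", "smbfs", "fuse.sshfs"}
NETWORK_SOURCE_PREFIXES = ("curlftpfs#", "xtreemfs@")

def mount_for(path, mounts_text):
    """Return (source, fstype) of the longest mount point prefixing path.

    mounts_text is in /proc/mounts format: "source mountpoint type opts ...".
    """
    best = ("", "", "")  # (source, mountpoint, fstype)
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        source, mnt, fstype = parts[0], parts[1], parts[2]
        prefix = mnt.rstrip("/") + "/"
        if (path == mnt or path.startswith(prefix)) and len(mnt) >= len(best[1]):
            best = (source, mnt, fstype)
    return best[0], best[2]

def looks_like_network_mount(path, mounts_text):
    source, fstype = mount_for(path, mounts_text)
    return (fstype in NETWORK_FS_TYPES
            or source.startswith(NETWORK_SOURCE_PREFIXES))
```

Checking the source field as well as the type field is what lets plain "fuse" mounts like curlftpfs be caught while local FUSE filesystems (ntfs-3g) stay "local", which is exactly the ambiguity raised in the comments above.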
It was me that started the thread in comment 34. This side effect is, from a purely cosmetic point of view, pretty awful. It makes the folder view desktop widget look terrible, as it only shows the mimetype icons, not the proper link icon, so (with openSUSE) you get by default a desktop widget with 5 or 6 cogs in it. It hardly sells the desktop. A little option in the widget to let you see the real icons in this case only, i.e. the desktop folder, would be nice. Is that possible? Mal

Although it is now almost one year until Christmas :-), I'd like to sketch a different solution:
1.) When the user requests to display the contents of a directory (including the case of the desktop folder), the system tries for a fixed amount of time (e.g., half a second) to load all icons in the same way as it is currently done for a local file system.
2.) If loading all icons completes successfully before the timeout, the folder content is displayed with proper icons.
3.) If the timeout is exceeded for whatever reason (e.g., requesting an sshfs-mounted directory containing many files), those items for which an icon has been loaded (if any) are displayed; for other items the cog wheel is displayed.
4.) After the timeout, loading the icons continues in the background, and the view is updated continuously as more icons are loaded. Thereby the user can start browsing the folder contents without having to wait for all icons to be loaded.
5.) Those icons which are currently visible (taking into account if the user scrolls down) are fetched first.

With this approach, the user interface would always feel responsive, and there is no need to define a criterion to decide whether or not to load the appropriate icons. The only parameter here is the timeout, which, however, is not very critical (it must be small enough to avoid unpleasant delays, and large enough to prevent even files on a local file system from showing up as cog wheels for a tiny moment).
I implemented a similar method in a photo browsing application to display photo thumbnails, and I really like its responsive behaviour when accessing a collection of ~10000 photos. What do you think about this proposal? Would it require a major redesign of the existing software to implement it? Kind regards, Markus

I think that Markus Grabner's proposal is really good! I've opened bug 290680, which is related to this thread, and your workaround could fix it too! I agree, having a fully async file manager (not only for thumbnails) would be perfect.

How about just stopping non-critical processes after the timeout instead of moving them to the background? I don't like to push the dust under the carpet. As these processes are async, why not...

@#39: It would not be like pushing the dust under the carpet, but more like throwing it away as you walk. What I imagine is just like the thumbnails view, where what is really updated is just what is needed (aka what you see).

Because it would be easier to implement and less error-prone. This bug was opened in 2008 and is still unresolved. Going fully async would probably create more problems than it solves.

...probably you're right, but dropping features because they're not easy to implement is (imho) a big loss. Anyway, a big step forward in identifying some network filesystems has been done; Dolphin now is much snappier here with 4.7.4, so there is not so much hurry anymore.

But since 4.7.4, the personal icon display feature has disappeared (a regression)! We have NFS-mounted home directories in our office with a Gbit network. We never had any performance issues, but now our users are complaining that the number of elements of a folder is not displayed anymore. So the possibility to enable/disable the directory listing for NFS would be really great.

In KDE 4.8 Dolphin shows the number of items contained in directories as soon as you scroll, good! Personal icon display is still missing, but I bet it will be the next step.
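The timeout-plus-background-fill idea proposed in the comments above can be sketched with a thread pool. This is a simplified illustration under stated assumptions: the `list_with_icon_budget` helper, the 8-worker pool and the `load_icon` callback are mine, and a real file manager would keep consuming results from the still-running tasks to update the view instead of stopping as this sketch does.

```python
import concurrent.futures

def list_with_icon_budget(names, load_icon, budget=0.5):
    """Load icons for up to `budget` seconds; anything slower gets a
    placeholder (None) that the UI would fill in later.

    `load_icon` stands in for the slow per-item work (reading
    .directory files, counting items in subdirectories, ...).
    """
    icons = {name: None for name in names}
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(load_icon, name): name for name in names}
        done, pending = concurrent.futures.wait(futures, timeout=budget)
        for fut in done:
            icons[futures[fut]] = fut.result()
        # A real implementation would let `pending` keep running and
        # update the view as results arrive; this sketch just stops.
        for fut in pending:
            fut.cancel()
    return icons
```

On a fast local filesystem everything completes inside the budget and the user sees no difference; on a slow mount the listing appears immediately with placeholders, which is the behaviour the proposal asks for.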
Re comments 33, 34, and 36: this looks like a bug in folderview. If it's still not fixed, please report it there; I don't maintain that code.

About the idea of making this delayed: yes, it's probably the best solution. We already do delayed mimetype detection (for cases where the extension is unknown or ambiguous); we could also do delayed ".directory"-file loading. Peter, what do you think? I'm not sure how much Dolphin is still using kdelibs for the delayed mimetype determination stuff...

> Peter, what do you think? I'm not sure how much dolphin is
> still using kdelibs for the delayed mimetype determination stuff...
@David: Dolphin only uses KFileItem and KDirLister from kdelibs. So in case we do delayed mimetype detection, it would be absolutely fine from a Dolphin point of view for the .directory file to be read asynchronously from kdelibs and the icon updated in Dolphin later. If you have time to implement this, please just forward me the patch so that I can check it in the scope of Dolphin (or keep me on CC when pushing the patch). Thanks!
Hi, I'm working with a FUSE fs and experienced the same. The filesystem provides by design a .directory which fits the target: for an smb share, an smb share icon; for a hard disk, a harddrive icon; for a usb stick, a removable pendrive icon. This is not slow at all, so the very strict choice to never try to process the .directory is not the best option in my opinion. A heuristic manner, like the one described in comment 37, is better. And maybe some support from other services as well. I develop a successor of gamin, the filesystem change notifier, and am thinking of enabling some caching abilities. Caching the .directory file is one of the possibilities... Stef

Hi, I've been looking into the cause of this issue. I agree that a background thread or other process doing the (slow) reading of mimetypes and .directory files might work and make the reading a lot faster. But the problem here is what is causing this. First, the determination of the filetype (mimetype for regular files). It can be slow, and doing that over and over again is silly. Why not use the field st_rdev of the stat struct? It's not used for regular files, and large enough to hold a unique id into a (standard) database of filetypes. This st_rdev attribute is only used for special files as far as I know, and is zero for regular files and directories. On my machine the type is unsigned long int, or better, unsigned long long int. In any case it's big enough. It's safe to use an id: an mp3 file will not suddenly turn into an ogg file, a pdf will never become a doc or something else. Apps creating these files can set this value, and apps using the file can set it when it's not been set before, or correct it when not set to the right value. When using this value, no other process is required; it comes to an app like Dolphin just like the mode (filetype, owner, group, permissions, times) does.
Further, when it comes to directories, the reading of the .directory file is a time-consuming task. In my FUSE fs, I'm using the mimetype for directories like: mimetype="inode/directory;role=remote.net.smb.network" for the directory representing the SMB network, for example. More examples: local.map.system.programs, local.dev.cdrom.audio, local.map.documents, local.map.audio, etc. Because the mimetype gives you a default icon (just like this mimetype does for regular files), it's not required to use a .directory file to set an icon. And it's possible to store this mimetype for directories also in the st_rdev field. I'm not saying that .directory files are useless, but using the mimetype for directories can speed things up. And - as I see it - directories can have a special meaning within the context of the computer or the user, and till now there is no way to set this. Using the role part of the mimetype makes that possible. Stef

Same problem with icons showing as cog wheels on KDE 4.7.4 (debian testing) with NFS home. I never had any slowness problem with KDE before, so did you remove this feature?

Clearly the solution to make everyone happy is to load icons delayed, just like mimetype determination and previews. With fast filesystems users won't notice any difference, and with slow filesystems, users will get a directory listing as fast as possible, and details later on.

This issue drives me crazy. Browsing an NFS/CIFS share which is mounted via my DSL Internet connection is no fun - I have to do it multiple times a day... A simple option which prevents Dolphin from reading the mime types and especially recursively counting the numbers of items (which is totally uninteresting, by the way) in the displayed folders would do the trick for me - removing the tab does not, in my test, stop Dolphin from counting...
Cheers, Bjoern

I think Dolphin's developers should use a slow, high-latency connection to a remote folder tree with thousands of files as their _primary_ test environment. If a feature works well there, then it will also shine with a local SSD drive. And if it doesn't, then it's probably a bad idea to add that feature.

Just bumped into this problem with an smb directory, mounted with mount.cifs. There are 10k avi video files there, and opening it in Dolphin shows an empty directory for more than 3 minutes while loading (this is over a wi-fi connection). Opening the same directory with KIO (an smb:// link) loads and shows the directory contents iteratively, but still takes over 3 minutes to load completely.

>
> Git commit 6369b556ae9beef6888699d23b91326bac950ba4 by David Faure.
> Committed on 27/03/2013 at 14:29.
> Pushed by dfaure into branch 'KDE/4.10'.
>
> Implement delayed loading of slow icons (from .desktop or .directory files)
>
> Thanks to Pierre Sauter for making me fix this, and for testing that it
> works over sshfs.
>
> FIXED-IN: 4.10.2
>
> M +19 -3 kio/kio/kfileitem.cpp
> M +6 -0 kio/kio/kfileitem.h
> M +2 -0 kio/tests/kfileitemtest.cpp
>
> http://commits.kde.org/kdelibs/6369b556ae9beef6888699d23b91326bac950ba4
Upgraded to Linux Mint 16 with KDE 4.11.3 and Dolphin 4.11.3, and this bug seems to be back. Worked fine on LM 15. Please let me know if you need more information.

I was just experiencing a similar problem when browsing a CIFS-mounted share with a slow implementation of the dfree_command (using greyhole storage pooling, but the only thing that matters here is that counting free disk space is relatively slow). When I use Dolphin to access a folder on the CIFS-mounted share, the disk free command gets called once for every subfolder in detail view, or on mouse-over when the information side bar is refreshed, even though it is the same filesystem and the information is never displayed to the user. I think this behaviour is a bug. This doesn't matter with a fast df command, as samba provides by default, but with a 0.1 s execution-time command (greyhole), it really slows navigating down.

I agree 100%. Polling any information on a slow connection has to be avoided. It is more useful to quickly browse the directories than to have mimetype items and stuff appearing after 10 minutes for a large folder. If you want to have information, add a "fetch info" button or right-click on the file/dir to show properties. Don't even think about doing it in the background, as it will slow down the connection. This slows down my entire KDE desktop and occasionally freezes Dolphin. I can watch it SLOWLY populate file counts. This is also over a LAN, 1 Gb ethernet from my machine to the server, so it's not just slow connections. It doesn't usually crash Dolphin on LAN. But it's still slow.

(In reply to comment #61)
> I agree for 100%. Polling any information on a slow connection has to be
> avoided. It is more useful to quickly browse the directories then having
> mimetype items and stuff appearing after 10 minutes for a large folder. If
> you want to hav information add a button "fetch info" or rightklick on
> file/dir to show properties.
> Don't even think about doing it in the
> backround as it will slow down the connection.
>
> This slows down my entire KDE desktop and occasionally freezes Dolphin.

Yes, this looks like a regression in the new version of Dolphin. Will somebody read these comments, or do we need to open a new bug?

Yes, please fix this. NO metadata loading at all on networked drives. It's pretty straightforward. The delays over a slow WAN are huge and not worth it.

*** Bug 215953 has been marked as a duplicate of this bug. ***

*** Bug 420168 has been marked as a duplicate of this bug. ***

(In reply to Germano Massullo from comment #67)
> *** Bug 420168 has been marked as a duplicate of this bug. ***

Summary of my bug report: 1 Gbit/s network connection to the NAS. I mount the remote folder with the command

$ sshfs -o cache=yes -o kernel_cache -o compression=no user@ip_address:/zpool_1/foo /tmp/folder

Dolphin is very slow to populate the view while browsing big folders. When I use GNOME Nautilus instead, it is as fast as a local folder. I noticed that, after mounting the remote storage, if I browse the folder for the first time with Nautilus, and then with Dolphin, the latter will be as fast as it should be.

Could you also test with Krusader? (Both Dolphin and Krusader use KIO.)

Alas, can't reproduce.
- Are we all talking about sshfs mounts?
- Are we all talking about standard local networks without tunnels and the like?
- What version of KDE Frameworks does this happen with?
- What version of Dolphin does this happen with?
- Is it also slow when you run `time kioclient5 ls 'file:/tmp/folder'`?
- What's the entire output of that kioclient5 command?
- Are Dolphin previews enabled?
- What does the content of the directory that this happens with look like? How many files are there? What type of files are they? Are they very large or small, or just a mixed bag? How many subdirectories are there? How many files and what type of files do they contain on average? The greater the detail the better.
(In reply to Harald Sitter from comment #70)
> Alas, can't reproduce.
>
> - Are we all talking about sshfs mounts?
Myself, yes
> - Are we all talking about standard local networks without tunnels and the
> like?
Myself, SSHFS running on a local area network
> - What version of kde frameworks does this happen with?
KDE Frameworks 5.70.0
> - What version of dolphin does this happen with?
19.12.2
> - Is it also slow when you run `time kioclient5 ls 'file:/tmp/folder'`
FOLDER 1
real 0m0,254s
user 0m0,069s
sys 0m0,035s
In Dolphin it took almost 1 minute to populate the size column and become responsive again.
FOLDER 2 (a big subfolder of folder 1)
real 0m0,136s
user 0m0,070s
sys 0m0,031s
In Dolphin it took almost 1 minute to populate the size column and become responsive again.
> - What's the entire output of that kioclient5 command?
I cannot paste the content of my folders here.
> - Are dolphin previews enabled?
No
> - What does the content of the directory that this happens with look like?
Folder 1 and folder 2 contain .jpg .cr2 .nef .xmp .mp4 files.
> How many files are there? What type of file are? Are they very large or
> small or just a mixed bag? How many subdirectories are there? How many files
> and what type of files do they contain on average? The greater detail the
> better.
Dolphin's folder properties for folder 1 reports 38243 files and 807 subfolders. The average file size is 10 MB. **Also, folder properties takes ages to retrieve the number of files and folders.** Krusader seems to behave waaaaay better.

Created attachment 129379 [details]
script to generate a large content tree

19.12.2 is no longer supported. Can anyone reproduce with 20.04.1 or git master?

What's more, I can't reproduce this. I've set up a tree similar to the one described (gen_tree.rb attachment) with 38k files and 800 folders. I access it from a VM over a gigabit network with Dolphin 20.04 + KF 5.70 in details view mode, and it takes maybe a second or two to load the content. Both server and client use OpenSSH 8.2.
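For anyone wanting to reproduce this setup without the attachment, a tree of the shape described above (800 folders, ~38k files) is quick to generate. The following is a hypothetical stand-in using C++17 `std::filesystem`, not the actual gen_tree.rb script:

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Hypothetical test-tree generator (not the gen_tree.rb attachment):
// builds `folders` directories of `filesPerFolder` files each. The tree
// described above would be genTree("/srv/foo/folder1", 800, 48).
void genTree(const fs::path &root, int folders, int filesPerFolder) {
    for (int i = 0; i < folders; ++i) {
        fs::path dir = root / ("folder" + std::to_string(i));
        fs::create_directories(dir);
        for (int j = 0; j < filesPerFolder; ++j) {
            // Content is irrelevant for stat/readdir benchmarks, so the
            // files are tiny placeholders rather than 10 MB images.
            std::ofstream(dir / ("img" + std::to_string(j) + ".jpg")) << "x";
        }
    }
}
```

Export the resulting root over sshfs and point Dolphin at it to approximate the directory described in the reports above.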
Without further information I'm not sure we can do anything here. Perhaps it's already fixed in 20.04; perhaps it has nothing to do with KIO or Dolphin.

Client-side command, for the record:

mkdir /tmp/folder; echo 1 | sudo tee -a /proc/sys/vm/drop_caches; sudo umount /tmp/folder; sshfs -o cache=yes -o kernel_cache -o compression=no me@bear.local:/srv/foo /tmp/folder && dolphin --new-window /tmp/folder/folder1

(In reply to Harald Sitter from comment #72)
> Created attachment 129379 [details]
> script to generate a large content tree
>
> 19.12.2 is no longer supported.
> Can anyone reproduce with 20.04.1 or git master?

I installed neon-unstable-20200614-1102.iso, which ships Dolphin 20.07.70, KDE Frameworks 5.72.0, Qt 5.14.2, Krusader 2.8.0-dev, GNOME Nautilus 3.26.4.

I can still reproduce the problem with Dolphin; it's a bit more responsive than 19.12.2 but it still hangs. Krusader performs much better, but not as well as Nautilus, which shows almost no stuttering while browsing huge directories. Note that I rebooted the system every time I had to test another file manager, because this problem happens only the first time you open the directory after a system boot.

Some outputs of time kioclient5 ls 'file:/tmp/folder' run on some of the folders that trigger the problem:

real 0m0,222s
user 0m0,087s
sys 0m0,026s

real 0m0,154s
user 0m0,067s
sys 0m0,040s

real 0m0,172s
user 0m0,084s
sys 0m0,026s

Does the server have an HDD or an SSD? While Dolphin hangs, what are the top CPU consumers in KSysGuard? Also, while it hangs, do you see network activity in KSysGuard?

... and does it make a difference if you disable the information panel on the right, or disable the Preview in there?

(In reply to Harald Sitter from comment #74)
> Does the server have a HDD or an SSD?
> While dolphin hangs what are the top CPU consumers in ksysguard?
> Also, while it hangs do you see network activity in ksysguard?
Thanks to the people of the #zfsonlinux IRC channel I got a way to reliably reproduce the problem. On the server,

# zpool export tank
# zpool import tank

clears the entire ZFS ARC cache. The disks are HDDs in a RAID-Z configuration. After each run I remount the SSHFS mount on the client. I have run the tests several times with both GNOME Nautilus and KDE Dolphin, running zpool export / import in between, and what I found out is: both Nautilus and Dolphin take a long time to populate the "Size" column, and both show network usage during this activity, but:
- Nautilus does not hang, and the user can browse folders without being stuck
- Dolphin starts to lag, and the user interface becomes stuck

During this period of time, client and server are communicating and the network activity is on the order of ~tens of kB/s. So I really think it is a matter of how Dolphin's threads are handled.

(In reply to Harald Sitter from comment #75)
> ... and Does it make a difference if you disable the information panel on
> the right, or disable the Preview in there?

I have never used the preview and information panel on the right. Concerning Krusader: in my opinion it is way smoother than Dolphin because it does not calculate the size of subfolders; you only see a <DIR> entry in the Size column. The size is calculated only for files in the current path, not for subfolders.

I am still not able to reproduce this :/

Are you 100% confident that previews aren't enabled? Do you know how to use gdb so we can verify that?

For good measure I did try with previews, and I can kind of reproduce the intermittent lockups, but only when previews are enabled. I'm starving IO responsiveness by setting a cap on read requests per second, `IOReadIOPSMax='/dev/sdc 4'`, on the ssh slice, and that then indeed causes Dolphin to get choppy as it waits. What seems to happen there is that a mimetype query is issued before starting a PreviewJob [1], and that is a read operation unless the KFileItem was explicitly told not to look at the content [2].
Without previews or the info sidebar I can limit IO to even 1 operation per second and Dolphin will stay responsive. So, something still doesn't add up here.

[1] https://invent.kde.org/system/dolphin/-/blob/master/src/kitemviews/kfileitemmodelrolesupdater.cpp#L905
[2] https://invent.kde.org/frameworks/kio/-/blob/master/src/core/kfileitem.cpp#L506

Created attachment 129705 [details]
gwenview multiple files "move to"

(In reply to Harald Sitter from comment #77)
> Are you 100% confident that previews aren't enabled?

Yes, and moreover I found out that this also happens in Gwenview when:
1. I select **more than one picture**, then right-click "Move to"
2. In the new "move to" window there is a path text box. As soon as I start to type a path that is on the remote SSHFS mount, Gwenview completely freezes and I can see the same network activity between the server and the client.

> Do you know how to use gdb so we can verify that?

I know how to use it, but I would like to receive a detailed step-by-step procedure so I can better provide all the information you may need.

Actually, on second thought we won't need gdb at all:

- close all running instances of dolphin
- open ksysguard
- search for 'thumb'
- if thumbnail or thumbnail.so or anything similar is running, make sure to terminate it before proceeding
- once it is closed, run
- KDE_FORK_SLAVES=1 dolphin --new-window /the/path/of/our/mount
- scroll around a bit
- check ksysguard again
- if thumbnail.so is running (again) then previews are being generated somehow

Should thumbnail.so not get started again, then previews are disabled; otherwise they somehow are enabled (in which case we know it's the previewing starving the IO).

Dear Bug Submitter,

This bug has been in NEEDSINFO status with no change for at least 15 days. Please provide the requested information as soon as possible and set the bug status as REPORTED.
Due to regular bug tracker maintenance, if the bug is still in NEEDSINFO status with no change in 30 days, the bug will be closed as RESOLVED > WORKSFORME due to lack of needed information.

For more information about our bug triaging procedures please read the wiki located here: https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

If you have already provided the requested information, please mark the bug as REPORTED so that the KDE team knows that the bug is ready to be confirmed.

Thank you for helping us make KDE software even better for everyone!

This bug has been in NEEDSINFO status with no change for at least 30 days. The bug is now closed as RESOLVED > WORKSFORME due to lack of needed information.

For more information about our bug triaging procedures please read the wiki located here: https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

Thank you for helping us make KDE software even better for everyone!

(In reply to Harald Sitter from comment #79)
> Actually on second thought we'll not need gdb at all
>
> - close all running instances of dolphin
> - open ksysguard
> - search for 'thumb'
> - if thumbnail or thumbnail.so or anything similar is running make sure to
> terminate it before proceeding
> - once it is closed, run
> - KDE_FORK_SLAVES=1 dolphin --new-window /the/path/of/our/mount
> - scroll around a bit
> - check ksysguard again
> - if thumbnail.so is running (again) then previews are being generated
> somehow
>
> should thumbnail.so not get started again then previews are disabled
> otherwise they somehow are enabled (in which case we know its the previewing
> starving the IO).
I followed the procedure and no thumb* process ever appeared in the process list.

I have just updated to 5.75 and things seem to have improved, but I cannot do many tests due to this Dolphin crash bug -> https://bugs.kde.org/show_bug.cgi?id=427118

(In reply to Germano Massullo from comment #83)
> I have just updated to 5.75 and things seemed to be improved, but I cannot
> do much tests due this Dolphin crash bug ->
> https://bugs.kde.org/show_bug.cgi?id=427118

After days of testing I can say that the problem still exists.

Hi, it's a long time since I've looked at the problem. What has changed since then? I saw that there is no separation between the default attributes like size, permissions, owner/group and c/mtimes, and more in-depth information like the mimetype, which requires much more IO. What I believe I've mentioned before is that these lookup processes should be handled separately: some threads do the lookup of the default attributes, and others do the determining of the mimetypes/icons etc. This can be done with different queues of "lookup" tasks, and when the lookup of mimetype/icon takes too much time, it can switch over to a much simpler lookup, for example by checking the extension. That saves a lot of IO. But I've seen the code, and it's horrible to do this, imo.

Stef

(In reply to Stef Bon from comment #85)
> Hi,
>
> it's a long time since I've looked at the problem.
> What has been changed since then? I saw that there is not a seperation
> between the default attributes like size, permissions and owner/group and
> c/mtimes, and more in depth information like mimetype, which require much
> more io.
> What I beleive I've mentioned before is that these lookup processes should
> be handled seperated: so some threads do the lookup of default attributes,
> and others do the determing of the mimetypes/icon etc.
> This can be done with different queues of "lookup" tasks, and when doing the
> lookups of mimetype.icon takes too much time, it can switch over to a much
> more simple lookup by checking the extension for example. That saves a lot
> of io. But I've seen the code and it's horrible to do this imo.
> Stef

Also, I think that an option to disable determining mimetypes/icons for remote mounts should be added. Imagine a mounted device that runs over a slow network.

I agree. That should be an option. So give the user a choice: lookups of mime/icon should be one of:
- do always, no matter what
- do heuristically; when too slow, do either:
  - switch over to a simple determination of the mime type by looking at the extension (instead of reading the first x bytes and analyzing them)
  - disable
- never do on network filesystems (which are?)

Stef

Now, enough plans. We should start working on this issue, and by that I mean really start. Not only posting, but really starting. I can write a bit of C++, and I think the code is a mess, but maybe we can do a bit of a cleanup as well. What about that?

Stef

Serious. Germano and others. This has taken far too long. We should team up and try to solve this issue. I'm a writer of FUSE filesystems, now busy writing an SSH server, to provide a more flexible server than the OpenSSH one for FUSE clients and IO solutions I also work on. My speciality is C, a bit of C++ (although I prefer C very much), FUSE, SSH, SFTP and network filesystems. Can someone help me address this issue?

Stef Bon

PS1 I think it's important to first analyze the problem thoroughly before doing anything, and then plan the action and stay coordinated.

(In reply to Stef Bon from comment #89)
> Serious.
> Germano and others. This has taken too far long.
> We should team up and try to solve this issue.

I volunteer as a tester, but at the moment I don't have time to study the whole KDE Frameworks infrastructure from scratch.

Ok Germano, better to be honest. I respect that. But I want some help.
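The extension-only fallback proposed above avoids exactly the expensive path identified earlier (a content read for mimetype determination): classifying by file name needs no read() on the remote file at all. A toy self-contained illustration in C++ — `extensionMimeType` and its lookup table are hypothetical; in Qt the same idea exists as QMimeDatabase's MatchExtension mode:

```cpp
#include <map>
#include <string>

// Hypothetical helper illustrating the proposal: determine the mime type
// from the file name alone, with no read() on the (slow, remote) file.
// The table is a tiny stand-in for a real glob database.
std::string extensionMimeType(const std::string &name) {
    static const std::map<std::string, std::string> table = {
        {"jpg", "image/jpeg"},
        {"cr2", "image/x-canon-cr2"},
        {"nef", "image/x-nikon-nef"},
        {"mp4", "video/mp4"},
        {"txt", "text/plain"},
    };
    auto dot = name.rfind('.');
    if (dot == std::string::npos)
        return "application/octet-stream";  // no extension: don't guess
    auto it = table.find(name.substr(dot + 1));
    return it != table.end() ? it->second : "application/octet-stream";
}
```

The trade-off is accuracy: a misnamed file gets the wrong type, which is why this would only be a fallback for slow filesystems, not the default path.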
For example, where do I start looking? I do not need to look at the classes for the UI, for example. I can remember:
- dolphinpart.cpp
- kitemviews/{kfileitemmodel.cpp, kstandarditemmodel.cpp}

I believe. Can someone point me a bit in the right direction (to find out why it takes too long for certain (network) filesystems == the places where there is a lot of IO for determining the mimetype)? If you can also point me to other sources like design notes, important discussions etc., please let me know. And where can I write and share my ideas? That's not here. Is there a developers' corner or something similar for Dolphin?

Thanks in advance,
Stef Bon
the Netherlands

One would first need to understand why it blocks, and for that you need to catch it in the act, as it were. When it locks up, grab a backtrace with gdb and that should tell you where you need to look in the code. Alternatively, you could try finding the blocking code as a hot spot via perf or hotspot or callgrind, but I'm not entirely certain you'll see it in the sea of otherwise unrelated but also expensive code paths.

That being said, in my investigation I've found numerous blocking paths and isolated them into standalone bug reports (they are all linked at the top in the "see also" section), so I'd encourage you to check them out lest you try to track down an already known problem. The unfortunate thing is that, as I recall, Germano's description of his Dolphin setup wouldn't hit any of the code paths I've found, as they all had to do with either the Dolphin side bar or thumbnails. Simply put, the only thing his Dolphin actually does is stat URLs for the Dolphin file view and every once in a while update the free disk space info. Neither should be so slow as to cause micro-blocking.

I know I have to see/experience why it is running slow. I'm not using gdb to run code, just to analyze a core dump. The way I follow code is adding extra log messages to syslog, which has always given me enough info, well, at least till now.
The info you point at is useful, Harald (the see-alsos). Do you also have some information on how the interaction with the kio-slaves works? (As a developer of FUSE filesystems I have something against these. It is far better to leave the lookup of file info to the kernel and a FUSE service. Then Dolphin can concentrate more on the way it presents the file info. But it's the way it is.) And is there a developers' corner where I can write down everything I see/discover/find, plus my opinions, and discuss?

Stef Bon

(In reply to Stef Bon from comment #93)
> Do you have also some information how the interaction is with the kio-slaves?

That really depends on the actual code path that is slow. KIO may not even be involved; in fact, if KIO slaves were involved we'd not be having this problem, as they all sport async API by design. It's skipping KIO that usually causes performance troubles. Notably there are problems with certain bits of code assuming that "local URI == fast" and consequently using sync POSIX API in the GUI thread, e.g. https://bugs.kde.org/show_bug.cgi?id=423502. Pretty much all the bugs in the "see also" section are instances of that.

> And is there a developerscorner where I can write down everything I
> see/discover/find/my opinion and discus?

https://invent.kde.org/system/dolphin/-/issues respectively https://invent.kde.org/frameworks/kio/-/issues is where we keep notes/todos. For discussion it's often best to hop on freenode #kde-devel, or just tag the relevant people in the issues on invent.

I attach this Dolphin gdb stack trace upon the suggestion of Elvis Angelaccio. It was taken while Dolphin (or KIO) was slowing down during directory reading.

$ gdb dolphin
Reading symbols from dolphin...
Reading symbols from /usr/lib/debug/usr/bin/dolphin-21.04.2-1.fc34.x86_64.debug...
(gdb) run
Starting program: /usr/bin/dolphin
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffe3619640 (LWP 317541)]
[New Thread 0x7fffe29f5640 (LWP 317542)]
kf.xmlgui: KActionCollection::setComponentName does not work on a KActionCollection containing actions! "dolphin"
Dwarf Error: Cannot not find DIE at 0xfdec [from module /usr/lib/debug/usr/lib64/libjxl.so.0.3.7-0.3.7-3.fc34.x86_64.debug]
[New Thread 0x7fffd8c4d640 (LWP 317543)]
[New Thread 0x7fffcdb72640 (LWP 317544)]
[New Thread 0x7fffcd371640 (LWP 317545)]
[New Thread 0x7fffccb70640 (LWP 317546)]
[New Thread 0x7fffbffff640 (LWP 317547)]
[New Thread 0x7fffbf7fe640 (LWP 317548)]
[New Thread 0x7fffbeffd640 (LWP 317549)]
[New Thread 0x7fffbe7fc640 (LWP 317550)]
[New Thread 0x7fffbdffb640 (LWP 317551)]
[New Thread 0x7fffbd7fa640 (LWP 317552)]
[New Thread 0x7fffbcff9640 (LWP 317553)]
[New Thread 0x7fff9bfff640 (LWP 317554)]
[New Thread 0x7fff9b7fe640 (LWP 317555)]
[New Thread 0x7fff9affd640 (LWP 317556)]
[New Thread 0x7fff9a7fc640 (LWP 317557)]
[New Thread 0x7fff99ffb640 (LWP 317558)]
[New Thread 0x7fff997fa640 (LWP 317559)]
[New Thread 0x7fff98ff9640 (LWP 317560)]
[New Thread 0x7fff7bfff640 (LWP 317561)]
[New Thread 0x7fff7b7fe640 (LWP 317562)]
[New Thread 0x7fff7affd640 (LWP 317563)]
[New Thread 0x7fff7a7fc640 (LWP 317564)]
[New Thread 0x7fff79ffb640 (LWP 317565)]
^C
Thread 1 "dolphin" received signal SIGINT, Interrupt.
0x00007ffff7dbf5bf in __GI___poll (fds=0x555555fc9220, nfds=11, timeout=14523) at ../sysdeps/unix/sysv/linux/poll.c:29
--Type <RET> for more, q to quit, c to continue without paging--
29 return SYSCALL_CANCEL (poll, fds, nfds, timeout);
(gdb) set height 0
(gdb) et print elements 0
Undefined command: "et". Try "help".
(gdb) set print elements 0 (gdb) set print frame-arguments all (gdb) thread apply all backtrace Thread 26 (Thread 0x7fff79ffb640 (LWP 317565) "QThread"): #0 __GI___getdents64 (fd=34, buf=buf@entry=0x7fff680154b0, nbytes=<optimized out>) at ../sysdeps/unix/sysv/linux/getdents64.c:32 #1 0x00007ffff7d929ed in __GI___readdir64 (dirp=0x7fff68015480) at ../sysdeps/unix/sysv/linux/readdir64.c:51 #2 0x00007ffff7c87b08 in walkDir(QString const&, bool, bool, dirent64*, unsigned int) [clone .constprop.0] (dirPath=@0x7fff79ffa5c8: {d = 0x7fff68004350}, countHiddenFiles=countHiddenFiles@entry=false, countDirectoriesOnly=countDirectoriesOnly@entry=false, allowedRecursiveLevel=allowedRecursiveLevel@entry=0, dirEntry=<optimized out>) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:41 #3 0x00007ffff7c87df5 in walkDir(QString const&, bool, bool, dirent64*, unsigned int) [clone .constprop.0] (dirPath=@0x555556293600: {d = 0x5555565eed50}, countHiddenFiles=countHiddenFiles@entry=false, countDirectoriesOnly=countDirectoriesOnly@entry=false, allowedRecursiveLevel=1, dirEntry=<optimized out>) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:80 #4 0x00007ffff7c4b67e in KDirectoryContentsCounterWorker::subItemsCount (options=<optimized out>, path=@0x555556293600: {d = 0x5555565eed50}) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:113 #5 KDirectoryContentsCounterWorker::countDirectoryContents (this=0x555555d27d30, path=@0x555556293600: {d = 0x5555565eed50}, options={i = 0}) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:121 #6 0x00007ffff5d4e809 in QObject::event (this=0x555555d27d30, e=0x55555663dbe0) at kernel/qobject.cpp:1314 #7 0x00007ffff68c1423 in QApplicationPrivate::notify_helper (this=<optimized out>, receiver=0x555555d27d30, e=0x55555663dbe0) at 
kernel/qapplication.cpp:3632 #8 0x00007ffff5d24098 in QCoreApplication::notifyInternal2 (receiver=0x555555d27d30, event=0x55555663dbe0) at kernel/qcoreapplication.cpp:1063 #9 0x00007ffff5d27606 in QCoreApplicationPrivate::sendPostedEvents (receiver=0x0, event_type=0, data=0x5555559837c0) at kernel/qcoreapplication.cpp:1817 #10 0x00007ffff5d75bf7 in postEventSourceDispatch (s=0x7fff68004fe0) at kernel/qeventdispatcher_glib.cpp:277 #11 0x00007ffff3c3f4cf in g_main_dispatch (context=0x7fff68000c20) at ../glib/gmain.c:3337 #12 g_main_context_dispatch (context=0x7fff68000c20) at ../glib/gmain.c:4055 #13 0x00007ffff3c934e8 in g_main_context_iterate.constprop.0 (context=context@entry=0x7fff68000c20, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4131 #14 0x00007ffff3c3cc03 in g_main_context_iteration (context=0x7fff68000c20, may_block=1) at ../glib/gmain.c:4196 #15 0x00007ffff5d75698 in QEventDispatcherGlib::processEvents (this=0x7fff68000b60, flags=<optimized out>) at kernel/qeventdispatcher_glib.cpp:423 #16 0x00007ffff5d22ab2 in QEventLoop::exec (this=this@entry=0x7fff79ffab90, flags=<optimized out>, flags@entry={i = 0}) at ../../include/QtCore/../../src/corelib/global/qflags.h:69 #17 0x00007ffff5b6625a in QThread::exec (this=<optimized out>) at ../../include/QtCore/../../src/corelib/global/qflags.h:121 #18 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x555555d2cb40) at thread/qthread_unix.cpp:329 #19 0x00007ffff45eb299 in start_thread (arg=0x7fff79ffb640) at pthread_create.c:481 #20 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Thread 25 (Thread 0x7fff7a7fc640 (LWP 317564) "Thread (pooled)"): #0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555bca664, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7a7fbae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74 #1 0x00007ffff45f7aef 
in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555bca664, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7a7fbae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123 #2 0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7a7fbae0, clockid=1, mutex=0x555555bca610, cond=0x555555bca638) at pthread_cond_wait.c:504 #3 __pthread_cond_timedwait (cond=0x555555bca638, mutex=0x555555bca610, abstime=0x7fff7a7fbae0) at pthread_cond_wait.c:637 #4 0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 577900995, type = <optimized out>}, this=0x555555bca610) at thread/qwaitcondition_unix.cpp:136 #5 QWaitConditionPrivate::wait (deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, this=0x555555bca610) at thread/qwaitcondition_unix.cpp:144 #6 QWaitCondition::wait (this=this@entry=0x5555558e80c0, mutex=0x555555845348, deadline={t1 = 497849, t2 = 577900995, type = 1}) at thread/qwaitcondition_unix.cpp:225 #7 0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x5555558e80b0) at thread/qthreadpool.cpp:140 #8 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x5555558e80b0) at thread/qthread_unix.cpp:329 #9 0x00007ffff45eb299 in start_thread (arg=0x7fff7a7fc640) at pthread_create.c:481 #10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Thread 24 (Thread 0x7fff7affd640 (LWP 317563) "Thread (pooled)"): #0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555558baeb0, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7affcae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74 #1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555558baeb0, expected=expected@entry=0, clockid=clockid@entry=1, 
abstime=abstime@entry=0x7fff7affcae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123 #2 0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7affcae0, clockid=1, mutex=0x5555558bae60, cond=0x5555558bae88) at pthread_cond_wait.c:504 #3 __pthread_cond_timedwait (cond=0x5555558bae88, mutex=0x5555558bae60, abstime=0x7fff7affcae0) at pthread_cond_wait.c:637 #4 0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 474781028, type = <optimized out>}, this=0x5555558bae60) at thread/qwaitcondition_unix.cpp:136 #5 QWaitConditionPrivate::wait (deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, this=0x5555558bae60) at thread/qwaitcondition_unix.cpp:144 #6 QWaitCondition::wait (this=this@entry=0x5555558e8230, mutex=0x555555845348, deadline={t1 = 497857, t2 = 474781028, type = 1}) at thread/qwaitcondition_unix.cpp:225 #7 0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x5555558e8220) at thread/qthreadpool.cpp:140 #8 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x5555558e8220) at thread/qthread_unix.cpp:329 #9 0x00007ffff45eb299 in start_thread (arg=0x7fff7affd640) at pthread_create.c:481 #10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Thread 23 (Thread 0x7fff7b7fe640 (LWP 317562) "Thread (pooled)"): #0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555558e8534, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7b7fdae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74 #1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555558e8534, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7b7fdae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123 #2 0x00007ffff45f15c4 in 
__pthread_cond_wait_common (abstime=0x7fff7b7fdae0, clockid=1, mutex=0x5555558e84e0, cond=0x5555558e8508) at pthread_cond_wait.c:504 #3 __pthread_cond_timedwait (cond=0x5555558e8508, mutex=0x5555558e84e0, abstime=0x7fff7b7fdae0) at pthread_cond_wait.c:637 #4 0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 577762414, type = <optimized out>}, this=0x5555558e84e0) at thread/qwaitcondition_unix.cpp:136 #5 QWaitConditionPrivate::wait (deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, this=0x5555558e84e0) at thread/qwaitcondition_unix.cpp:144 #6 QWaitCondition::wait (this=this@entry=0x555555bca500, mutex=0x555555845348, deadline={t1 = 497849, t2 = 577762414, type = 1}) at thread/qwaitcondition_unix.cpp:225 #7 0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x555555bca4f0) at thread/qthreadpool.cpp:140 #8 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x555555bca4f0) at thread/qthread_unix.cpp:329 #9 0x00007ffff45eb299 in start_thread (arg=0x7fff7b7fe640) at pthread_create.c:481 #10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 Thread 22 (Thread 0x7fff7bfff640 (LWP 317561) "Thread (pooled)"): #0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555558c4340, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7bffeae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74 #1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555558c4340, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7bffeae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123 #2 0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7bffeae0, clockid=1, mutex=0x5555558c42f0, cond=0x5555558c4318) at pthread_cond_wait.c:504 #3 
__pthread_cond_timedwait (cond=0x5555558c4318, mutex=0x5555558c42f0, abstime=0x7fff7bffeae0) at pthread_cond_wait.c:637
#4 0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 478540456, type = <optimized out>}, this=0x5555558c42f0) at thread/qwaitcondition_unix.cpp:136
#5 QWaitConditionPrivate::wait (deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, this=0x5555558c42f0) at thread/qwaitcondition_unix.cpp:144
#6 QWaitCondition::wait (this=this@entry=0x555555bb6c60, mutex=0x555555845348, deadline={t1 = 497857, t2 = 478540456, type = 1}) at thread/qwaitcondition_unix.cpp:225
#7 0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x555555bb6c50) at thread/qthreadpool.cpp:140
#8 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x555555bb6c50) at thread/qthread_unix.cpp:329
#9 0x00007ffff45eb299 in start_thread (arg=0x7fff7bfff640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 21 (Thread 0x7fff98ff9640 (LWP 317560) "dolphin:shlo3"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2e120) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff98ff9640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 20 (Thread 0x7fff997fa640 (LWP 317559) "dolphin:shlo2"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2dec0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff997fa640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 19 (Thread 0x7fff99ffb640 (LWP 317558) "dolphin:shlo1"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2dc60) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff99ffb640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 18 (Thread 0x7fff9a7fc640 (LWP 317557) "dolphin:shlo0"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a4a0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff9a7fc640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 17 (Thread 0x7fff9affd640 (LWP 317556) "dolphin:sh8"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a430) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff9affd640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 16 (Thread 0x7fff9b7fe640 (LWP 317555) "dolphin:sh7"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a3f0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff9b7fe640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 15 (Thread 0x7fff9bfff640 (LWP 317554) "dolphin:sh6"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a3b0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fff9bfff640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 14 (Thread 0x7fffbcff9640 (LWP 317553) "dolphin:sh5"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a370) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbcff9640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 13 (Thread 0x7fffbd7fa640 (LWP 317552) "dolphin:sh4"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a330) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbd7fa640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 12 (Thread 0x7fffbdffb640 (LWP 317551) "dolphin:sh3"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a2f0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbdffb640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 11 (Thread 0x7fffbe7fc640 (LWP 317550) "dolphin:sh2"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c257d0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbe7fc640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 10 (Thread 0x7fffbeffd640 (LWP 317549) "dolphin:sh1"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c2a2b0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbeffd640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7fffbf7fe640 (LWP 317548) "dolphin:sh0"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c25790) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbf7fe640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 8 (Thread 0x7fffbffff640 (LWP 317547) "dolphin:disk$3"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c279e0) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffbffff640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 7 (Thread 0x7fffccb70640 (LWP 317546) "dolphin:disk$2"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c27900) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffccb70640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 6 (Thread 0x7fffcd371640 (LWP 317545) "dolphin:disk$1"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c27880) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffcd371640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 5 (Thread 0x7fffcdb72640 (LWP 317544) "dolphin:disk$0"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c27940) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffcdb72640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 4 (Thread 0x7fffd8c4d640 (LWP 317543) "dolphin:cs0"):
#0 0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c26a70, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1 0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c26a70, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2 0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c26a20, cond=0x555555c26a48) at pthread_cond_wait.c:504
#3 __pthread_cond_wait (cond=0x555555c26a48, mutex=0x555555c26a20) at pthread_cond_wait.c:619
#4 0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c26a20, cond=0x555555c26a48) at ../include/c11/threads_posix.h:155
#5 util_queue_thread_func (input=input@entry=0x555555c25820) at ../src/util/u_queue.c:294
#6 0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffd8c4d640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7fffe29f5640 (LWP 317542) "QDBusConnection"):
#0 0x00007ffff7dbf5bf in __GI___poll (fds=0x7fffd40154b0, nfds=4, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007ffff3c9347c in g_main_context_poll (priority=<optimized out>, n_fds=4, fds=0x7fffd40154b0, timeout=<optimized out>, context=0x7fffd4000c20) at ../glib/gmain.c:4434
#2 g_main_context_iterate.constprop.0 (context=context@entry=0x7fffd4000c20, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3 0x00007ffff3c3cc03 in g_main_context_iteration (context=0x7fffd4000c20, may_block=1) at ../glib/gmain.c:4196
#4 0x00007ffff5d75698 in QEventDispatcherGlib::processEvents (this=0x7fffd4000b60, flags=<optimized out>) at kernel/qeventdispatcher_glib.cpp:423
#5 0x00007ffff5d22ab2 in QEventLoop::exec (this=this@entry=0x7fffe29f4b60, flags=<optimized out>, flags@entry={i = 0}) at ../../include/QtCore/../../src/corelib/global/qflags.h:69
#6 0x00007ffff5b6625a in QThread::exec (this=this@entry=0x7ffff609d060 <(anonymous namespace)::Q_QGS__q_manager::innerFunction()::holder>) at ../../include/QtCore/../../src/corelib/global/qflags.h:121
#7 0x00007ffff6022b6b in QDBusConnectionManager::run (this=0x7ffff609d060 <(anonymous namespace)::Q_QGS__q_manager::innerFunction()::holder>) at qdbusconnection.cpp:179
#8 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x7ffff609d060 <(anonymous namespace)::Q_QGS__q_manager::innerFunction()::holder>) at thread/qthread_unix.cpp:329
#9 0x00007ffff45eb299 in start_thread (arg=0x7fffe29f5640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7fffe3619640 (LWP 317541) "QXcbEventQueue"):
#0 0x00007ffff7dbf5bf in __GI___poll (fds=fds@entry=0x7fffe3618a88, nfds=nfds@entry=1, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007ffff4615f42 in poll (__timeout=-1, __nfds=1, __fds=0x7fffe3618a88) at /usr/include/bits/poll2.h:47
#2 _xcb_conn_wait (c=0x55555558f530, vector=0x0, count=0x0, cond=<optimized out>) at /usr/src/debug/libxcb-1.13.1-7.fc34.x86_64/src/xcb_conn.c:479
#3 0x00007ffff46178fc in _xcb_conn_wait (count=0x0, vector=0x0, cond=0x55555558f570, c=0x55555558f530) at /usr/src/debug/libxcb-1.13.1-7.fc34.x86_64/src/xcb_conn.c:445
#4 xcb_wait_for_event (c=0x55555558f530) at /usr/src/debug/libxcb-1.13.1-7.fc34.x86_64/src/xcb_in.c:697
#5 0x00007fffe37210f7 in QXcbEventQueue::run (this=0x5555555a4020) at qxcbeventqueue.cpp:228
#6 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x5555555a4020) at thread/qthread_unix.cpp:329
#7 0x00007ffff45eb299 in start_thread (arg=0x7fffe3619640) at pthread_create.c:481
#8 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7ffff0d5c980 (LWP 317537) "dolphin"):
#0 0x00007ffff7dbf5bf in __GI___poll (fds=0x555555fc9220, nfds=11, timeout=14523) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007ffff3c9347c in g_main_context_poll (priority=<optimized out>, n_fds=11, fds=0x555555fc9220, timeout=<optimized out>, context=0x7fffdc005000) at ../glib/gmain.c:4434
#2 g_main_context_iterate.constprop.0 (context=context@entry=0x7fffdc005000, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3 0x00007ffff3c3cc03 in g_main_context_iteration (context=0x7fffdc005000, may_block=1) at ../glib/gmain.c:4196
#4 0x00007ffff5d75698 in QEventDispatcherGlib::processEvents (this=0x55555565e1a0, flags=<optimized out>) at kernel/qeventdispatcher_glib.cpp:423
#5 0x00007ffff5d22ab2 in QEventLoop::exec (this=this@entry=0x7fffffffd680, flags=<optimized out>, flags@entry={i = 0}) at ../../include/QtCore/../../src/corelib/global/qflags.h:69
#6 0x00007ffff5d2afe4 in QCoreApplication::exec () at ../../include/QtCore/../../src/corelib/global/qflags.h:121
#7 0x00007ffff6233c60 in QGuiApplication::exec () at kernel/qguiapplication.cpp:1860
#8 0x00007ffff68c1399 in QApplication::exec () at kernel/qapplication.cpp:2824
#9 0x00007ffff7ef119e in kdemain (argc=<optimized out>, argv=<optimized out>) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/main.cpp:222
#10 0x00007ffff7cf1b75 in __libc_start_main (main=0x555555555070 <main(int, char**)>, argc=1, argv=0x7fffffffd958, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffd948) at ../csu/libc-start.c:332
#11 0x00005555555550ae in _start ()

I'm pretty sure you've not caught it being busy right there. The GUI thread is entirely idle. It's curious, though, that the walker thread is running...

What happens if you remove the Size column from the view and restart Dolphin? Does that make things more responsive? If so, does tweaking the folder size display setting in Dolphin's View Modes settings make any difference in performance? Also, what have you configured there for the size display?

Assuming the walker thread being active isn't a fluke, my theory would be that either your network or the server can't cope with the load that the size walking causes in addition to the regular activity for listing items (which would likely involve some bogus blocking IO calls on the GUI thread due to this being a local path, as mentioned in Comment #94).

Created attachment 140022 [details]
dolphin gdb_1
Created attachment 140023 [details]
dolphin gdb_2
Created attachment 140024 [details]
dolphin gdb_3
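For future reports, per-thread dumps like the ones attached above can be captured in one go while the hang is actually happening. A hedged sketch, assuming gdb and Dolphin's debug symbols are installed and a single dolphin instance is running:

```shell
# Attach to the running process non-interactively, dump every
# thread's backtrace, then detach. The process is only paused for
# the duration of the snapshot.
gdb --batch -p "$(pidof dolphin)" -ex 'thread apply all bt' > dolphin-bt.txt 2>&1
```

Running it two or three times a few seconds apart (as was done for gdb_1 through gdb_3) helps distinguish a thread that is merely idle from one that is stuck in the same call.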
Please note the questions I've asked.

I have VERY INTERESTING NEWS (see the bottom part of the message).

(In reply to Harald Sitter from comment #96)
> I'm pretty sure you've not caught it being busy right there. The GUI thread
> is entirely idle.

I think it's the client system triggering too much I/O on the server, because it tries to retrieve as much data as possible from the remote folders. This does not happen when using Krusader.

> What happens if you remove the Size column from the view and restart dolphin? Does that make things more responsive?

I am still able to reproduce a situation where Dolphin is stuck and the server is loaded with a lot of I/O, so the answer is "no, disabling the Size column does not make things more responsive".

> If so, does tweaking the folder size display setting in dolphin's View Modes settings make any difference in performance? Also, what have you configured there for the size display?

The size is already the smallest one and the view is "detailed view". BUT!! If I instead set Dolphin to use "Icon" view, the problem no longer happens! The folders that used to get Dolphin stuck no longer cause trouble, folder browsing is very smooth, and I also don't see any massive I/O on the storage server.

To make sure I could reliably reproduce (or not reproduce) the problem, and in order to clear all caches, I ran the following commands before each test:

on the client:
$ fusermount -u mount_point

on the storage server:
# zpool export pool_name
# zpool import -d /dev/disk/by-id pool_name

on the client:
$ sshfs username@ip_address:/pool_name/dataset mount_point

I would also reference bug #454722 - Dolphin becomes frozen if nfs shares declared in fstab are not available - previously the mount point just showed as empty. Currently reproduced with Dolphin 23.08.2.
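The cache-clearing steps above can be wrapped into a single script so every test run starts from the same cold state. A sketch only: `mount_point`, `pool_name`, `username`, and `ip_address` are the placeholders from the comment, and it assumes the zpool commands can be run on the storage server as root over ssh:

```shell
#!/bin/sh
# Cold-cache reset between test runs, following the steps above:
# unmount on the client, export/re-import the pool on the server
# (dropping its ARC state for that pool), then remount.
set -e
fusermount -u mount_point
ssh root@ip_address 'zpool export pool_name && zpool import -d /dev/disk/by-id pool_name'
sshfs username@ip_address:/pool_name/dataset mount_point
```

With `set -e` the script stops on the first failing step, so a test never silently runs against a stale mount.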
(In reply to Germano Massullo from comment #101)
> I think it's the client system triggering too much I/O on the server because
> it tries to retrieve as much data as possible from the remote folders. This
> is not happening when using Krusader

Krusader is still not immune to such issues, as others mentioned earlier; it just retrieves significantly less information than Dolphin does with its default configuration, which even descends into subdirectories, so it can be really excessive.

(In reply to Harald Sitter from comment #70)
> Alas, can't reproduce.

The key, which was mentioned here already, is high latency. That's a significant problem elsewhere in KDE too, mostly because:
- A whole lot of I/O operations are done one by one, and with high latency that becomes really obvious. A simple example of the problem is observing the SFTP KIO slave dealing with a directory full of symbolic links over a high-latency connection: an strace on sshd shows the stat(x) calls being issued rather slowly, each paying the latency penalty.
- Apparently there's no progressive file listing, and getting information does block the GUI.

Theoretically this doesn't even need networking to reproduce; it's just easier with networking, as that adds more latency, and I suspect that no helpful I/O scheduler can get in the way of producing high latency. Currently I can experience this with high latency caused not by the network but by accessing an HDD over NFS which is under heavy load, not just from the test host (which definitely makes it worse), though hammering it with just one host already makes the experience bad.

Do note that caching definitely gets in the way of reproducing the issue, so I'll address that. Given the previously mentioned conditions, looking at a directory with 30k+ files where new files are slowly being created:
Didn't measure the first listing attempt, but likely that's not the best anyway, so let's assume a hot cache, which gives the following experiences:
- `ls -la`: <1 s, reasonably fast
- Krusader: <2 s, still pretty decent, with the files at the top not changing, although starting to scroll makes the UI unresponsive. One large scroll with the mouse, and it's just gone for some time, though still only for seconds.
- Dolphin: ? s. At one point it starts showing the files, but due to the occasional creation of new files it never becomes usable, although it does show changes occasionally.

There should be quite a few ways of reproducing this even with, let's say, a local HDD being scrubbed, or worse, defragmented while testing. The tricky part is that without file changes, various caching strategies and even the I/O scheduler are likely to get in the way, but with file changes other bugs may be at play too:
- At least with Krusader, the tracking of directory contents tends to fall apart, mostly after heavy I/O, until reboot. This most commonly affects NFS mounts for me, but it has also happened multiple times after handling directories with a ton of files. What I tend to notice is that not all deleted files disappear from the list. Not sure how related it is to this issue, but mentioning it as it may matter.
- Quite rare, but I recently happened to have gam_server pegging a core, with Krusader staying unresponsive until gam_server got killed. I'm not really familiar with Gamin; I'm not even sure if it's actually needed or if I'd be better off removing it, as it's apparently optional, but reading around, it seems to be a troublemaker for others too, which could mess with testing.

Navigating through a CIFS-mounted (kernel mode; smb4k) directory containing a large number of files is extremely slow (for over 500 files it can hang for minutes). However, navigating the same folder through a KIO address ( smb://<server>/<path> ) is much faster (5 seconds).
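The one-round-trip-per-entry pattern described in the comments above compounds linearly with directory size. A hedged back-of-the-envelope sketch; the entry count comes from the 30k-file directory mentioned above, while the 50 ms RTT is an assumed figure, purely illustrative:

```shell
# If a listing issues one synchronous stat() per entry, the total
# wall-clock time is roughly N entries * RTT, regardless of bandwidth.
entries=30000   # directory size from the comments above
rtt_ms=50       # assumed round-trip time (illustrative)
awk -v n="$entries" -v rtt="$rtt_ms" \
    'BEGIN { printf "%d entries x %d ms RTT = %.0f s just for stat calls\n", n, rtt, n * rtt / 1000 }'
```

Which is consistent with the observations above: anything that skips or batches the per-entry metadata pass (a plain readdir stream, icon view without size columns, cached attributes) stays fast, while a per-entry pass takes minutes.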
I can confirm the original issue still persists in KDE Frameworks 5.92.0 (Plasma 5.24.7), Dolphin version 21.12.3. Navigating remote files in Dolphin is really slow (up to 10 seconds); changing the view to icon mode does not help at all. Removing some columns from the detailed view also does not make any difference.
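For anyone who wants to verify the per-file metadata traffic directly, as was done with strace on sshd earlier in this thread, a sketch; it assumes a reasonably recent strace with syscall-class support (spell out stat,lstat,fstat,statx on older versions), root privileges, and a single running dolphin process:

```shell
# Log every stat-family syscall of the target with wall-clock
# timestamps (-tt) and the time spent inside each call (-T).
# On a high-latency mount the calls show up one by one, each
# paying the full round-trip time.
pid=$(pidof dolphin | cut -d' ' -f1)
sudo strace -f -tt -T -e trace=%stat -p "$pid"
```

The same command pointed at the sshd child serving the sftp session shows the server-side view of the identical one-by-one pattern.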