Bug 178678 - Navigating mounted network locations is extremely slow in Dolphin compared to command line
Summary: Navigating mounted network locations is extremely slow in Dolphin compared to command line
Status: CONFIRMED
Alias: None
Product: frameworks-kio
Classification: Frameworks and Libraries
Component: general
Version: 5.111.0
Platform: unspecified Linux
Importance: NOR normal
Target Milestone: ---
Assignee: KIO Bugs
URL:
Keywords: efficiency, usability
Duplicates: 215953 420168
Depends on:
Blocks: 290666 290680 291138
 
Reported: 2008-12-24 16:26 UTC by doc.evans
Modified: 2024-03-07 15:02 UTC
CC List: 46 users

See Also:
Latest Commit:
Version Fixed In:


Attachments
fs requests of krusader reading / (36.46 KB, text/plain)
2011-04-22 19:56 UTC, Nikita Melnichenko
Details
fs requests of mc reading / (8.70 KB, text/plain)
2011-04-22 19:56 UTC, Nikita Melnichenko
Details
fs requests of ls reading / (7.08 KB, text/plain)
2011-04-22 19:56 UTC, Nikita Melnichenko
Details
script to generate a large content tree (559 bytes, application/x-ruby)
2020-06-15 11:07 UTC, Harald Sitter
Details
gwenview multiple files "move to" (72.06 KB, image/jpeg)
2020-06-26 17:35 UTC, Germano Massullo
Details
dolphin gdb_1 (40.60 KB, text/plain)
2021-07-13 11:58 UTC, Germano Massullo
Details
dolphin gdb_2 (36.05 KB, text/plain)
2021-07-13 11:58 UTC, Germano Massullo
Details
dolphin gdb_3 (83.92 KB, text/plain)
2021-07-13 11:59 UTC, Germano Massullo
Details

Description doc.evans 2008-12-24 16:26:36 UTC
Version:            (using KDE 4.1.3)
OS:                Linux
Installed from:    Ubuntu Packages

If one mounts a large file system over the Internet using sshfs, only slight delays are visible when navigating the file hierarchy from the command line. However, if one tries to navigate through the directories with dolphin or konqueror, one can wait tens of seconds for a directory to display; re-paints are also slow, as is simply using the scroll wheel to page down the entries in a large directory.

On the system I have been using for testing this bug, it takes approximately 1.5 seconds to perform an "ls -al" from the command line on a particular sshfs-mounted directory (which contains 209 items). Viewing the same directory with dolphin and konqueror takes about 80 seconds before control returns to the user.

The end result is that the KDE tools are simply unusable in this situation, and one is forced to use the command line.

I am using a cable broadband connection.
Comment 1 doc.evans 2008-12-25 04:40:48 UTC
I truly hate to second-guess developers, but are you absolutely certain that this is a kio problem?

It's not at all obvious to me why it would be.

(The answer "Yes" will satisfy me that you have thought about it and concluded that indeed it is; I just want to be sure that you aren't confusing this with something to do with the fish kio_slave.)

Comment 2 Thorsten Hirsch 2010-03-24 00:51:01 UTC
I can confirm this bug. It's still present in KDE 4.4.1.
Whenever I navigate up or down a directory, dolphin says "Connecting" and "Initiating protocol" in the status bar. So the ssh session is obviously not being kept open, but re-initiated from scratch for every directory change. That's what makes navigating so slow.
Please implement sessions for ssh (fish) in kio or dolphin and konqueror!
Comment 3 tony 2010-04-09 13:00:50 UTC
4.4.2 here.

While I can confirm the slowness on sshfs filesystems (as well as cifs/smbfs), I don't think this is related to kio.
It's just that dolphin seems to fetch a lot of info from files/directories... synchronously, blocking the GUI.
It seems a little better if I don't display some columns (like permissions, owner and so on).

So, if the underlying filesystem is slow, dolphin will hang here and there.
Konqueror from 3.5 is far snappier.
Comment 4 doc.evans 2010-04-09 17:23:10 UTC
I don't see how it can be an underlying filesystem problem. 

Responsiveness is great from the command line, so it seems to me that it has to be a KDE issue (whether it's a kio issue or not I'm less sure, but I'm as certain as I can be that it's a KDE problem).
Comment 5 tony 2010-05-04 12:44:44 UTC
Bug confirmed in kde 4.3.3.
Filesystem bugs can be excluded because
- the bug is present even when browsing smbfs/cifs-mounted shares
- konqueror from kde3.4 is much more responsive

Please, vote for this bug.
Comment 6 Antonio Orefice 2010-05-23 13:09:10 UTC
I've made some benchmarks comparing dolphin, konqueror, krusader, pcmanfm and thunar: each has to show a remotely mounted sshfs directory with 1997 items (all folders) and scroll, all on a 10 Mbps half-duplex link.

* Terminal (ls -la): 1.313sec

* Dolphin (kde4.3.3): about 10 seconds (but scrolling is almost impossible as dolphin seems to be frozen, it is even hard to close it)
* Konqueror (kde4.3.3): about 13 seconds (same issues with scrolling/freezing,closing)
* Krusader (2.0.0 for kde4) with mimetype magic disabled: about 6 seconds (scrolling is still a pain, but faster than konqueror and dolphin for kde4)


* krusader (1.9.0 for kde3): about 2 seconds (scrolling is very fast)
* pcmanfm: about 1.5 seconds (no issues in scrolling/closing)
* thunar:  about 1.5 seconds (no issues in scrolling/closing)
* rox:     about 1.5 seconds (no issues in scrolling/closing)

Hoping this will help to clarify that the speed issues are *not (only)* related to the filesystem; I think this should be addressed as a kio problem with slow filesystems.
Comment 7 Stefan Schramm 2010-08-01 14:13:34 UTC
*** This bug has been confirmed by popular vote. ***
Comment 8 Stefan Schramm 2010-08-01 14:31:04 UTC
When running tcpdump you can see that many packets (> 100) are transferred just from resizing the window or hovering over a directory.
Comment 9 Alvaro Aguilera 2010-12-03 10:51:07 UTC
Dolphin is also very slow when copying files via ssh. 

I did a simple test and it took dolphin 4 minutes and 20 seconds to copy what scp copies in just 2 minutes and 30 seconds. Almost twice as long.
Comment 10 Nikita Melnichenko 2011-03-07 09:36:56 UTC
It seems that dolphin/krusader tries to read file/directory contents when I simply select an element. Directory listing is much slower than in mc. Unfortunately, this is not usable. :(
Comment 11 t68b 2011-04-18 16:47:14 UTC
(In reply to comment #4)
> I don't see how it can be an underlying filesystem problem. 
> 
> Responsiveness is great from the command line, so it seems to me that it has to
> be a KDE issue (whether it's a kio issue or not I'm less sure, but I'm as
> certain as I can be that it's a KDE problem).


I'm seeing the same behavior in KDE 4.6.2. One additional data point -- kdevelop4:

at the shell, navigating the sshfs mount is fast/responsive
in Dolphin, slow/sluggish, as reported here

but,

navigating the same tree in kdevelop4 is fast/responsive.

For reference, I'm running

 kdevelop4-4.2.0-46.7.x86_64
 dolphin-4.6.2-5.1.x86_64
Comment 12 Nikita Melnichenko 2011-04-22 19:56:12 UTC
Created attachment 59213 [details]
fs requests of krusader reading /

FUSE debug mode shows all FS requests that krusader made
Comment 13 Nikita Melnichenko 2011-04-22 19:56:43 UTC
Created attachment 59214 [details]
fs requests of mc reading /

FUSE debug mode shows all FS requests that mc made
Comment 14 Nikita Melnichenko 2011-04-22 19:56:52 UTC
Created attachment 59215 [details]
fs requests of ls reading /

FUSE debug mode shows all FS requests that 'ls -la' command made
Comment 15 Nikita Melnichenko 2011-04-22 20:29:57 UTC
I used 'bindfs -d' to grab all requests when using different methods to list a directory. Results for 20 directories in '/' are:

Krusader: 258 requests.
MC: 69 requests.
ls -la: 49 requests (but note that it tries to read xattrs)
ls: 27 requests (seems to be the optimal number, because it only calls getattr, i.e. an 'lstat', for each element, which is sufficient to form a full-featured listing)

So Krusader makes too many excessive requests. Even if KDE programs need all those /dir/.directory requests, about 50 requests would be optimal...
Comment 16 sushyad 2011-04-29 20:42:19 UTC
I can confirm this on KDE 4.6.2, Dolphin 1.6.1

The remote directory is mounted using cifs with these options: directio,rsize=130048

Command line copy from remote CIFS server is ~30 MiB/s
Dolphin copy from remote CIFS server is ~10MiB/s
Konqueror copy from remote CIFS server is ~10MiB/s
Comment 17 Thomas Langkamp 2011-10-14 22:31:43 UTC
I want to confirm this bug, using kde 4.7.2 with opensuse tumbleweed 64bit and nfs4 transfers under dolphin. Dolphin freezes for up to 30 seconds then responds for 5 seconds, then freezes for 30 seconds and so on...
Comment 18 pier andre 2011-10-15 10:54:34 UTC
I also want to confirm this bug, using kde 4.7.2 with opensuse tumbleweed 64bit. If I transfer large files (10 movies, 0.7 GB each) using the sftp://... protocol in dolphin, the transfer starts, but if I then try to change directory, dolphin freezes for minutes. In practice, during a transfer dolphin is unusable on that sftp:// location; if I use konqueror it doesn't happen.
I would like to shine a light on this bug, which is very ugly.
Comment 19 tony 2011-10-15 10:58:42 UTC
Please, do not mix bugs.
We're talking about mounted and (relatively) slow network filesystems.
kio_sftp doesn't mount anything, nor does it try to read a lot of attributes from the files.
Comment 20 David Faure 2011-10-28 17:11:32 UTC
Obviously KIO thinks the file system is "local" while in fact it's remote. So it does operations that end up being too slow.
I initially thought about previews and mimetype determination, but the first one can be turned off in the menus, and the second one is delayed anyway.

Slow copying is another issue, possibly fixed by the much more recent commit 2cd2d1a4cfa1 on Sep 21, 2011, bug 257907 and bug 258497. Let's only talk about listing and reloading here, the bug title says "navigating".

[My testcase for future reference: sshfs www.davidfaure.fr: /mnt/df ; cd /mnt/df/txt.git/objects ]

Ah! With some breaking-in-gdb (poor man's profiling), I found it. What's really slow is subdirectories, in detailed view mode, because it lists every subdir in order to display the number of items. And this is NOT done in a delayed manner:

#0  0x00007f76af211b0a in __getdents64 () from /lib64/libc.so.6
#1  0x00007f76af2114d1 in readdir64 () from /lib64/libc.so.6
#2  0x00007f76b46d0dba in KDirModel::data (this=0xccfce0, index=..., role=743246400) at /d/kde/src/4/kdelibs/kio/kio/kdirmodel.cpp:739
#3  0x00007f76a3809f8f in DolphinModel::data (this=0xccfce0, index=..., role=743246400) at /d/kde/src/4/kde-baseapps/dolphin/src/views/dolphinmodel.cpp:119
#4  0x00007f76b179436d in QSortFilterProxyModel::data (this=0xcdcd50, index=..., role=743246400) at itemviews/qsortfilterproxymodel.cpp:1716
#5  0x00007f76b46f51cd in QModelIndex::data (this=0x7fff1e66f0a0, arole=743246400) at /d/qt/4/qt-for-trunk/include/QtCore/../../src/corelib/kernel/qabstractitemmodel.h:398
#6  0x00007f76b46ed564 in KFileItemDelegate::Private::itemSize (this=0xcaa6c0, index=..., item=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:226
#7  0x00007f76b46f11e3 in KFileItemDelegate::Private::display (this=0xcaa6c0, index=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:987   
#8  0x00007f76b46f0640 in KFileItemDelegate::Private::initStyleOption (this=0xcaa6c0, option=0x7fff1e66e800, index=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:857
#9  0x00007f76b46f2027 in KFileItemDelegate::paint (this=0xcaa510, painter=0x7fff1e66f570, option=..., index=...) at /d/kde/src/4/kdelibs/kio/kio/kfileitemdelegate.cpp:1252

Fixed (skipped for nfs/smb mounts), see commit in next comment.

Then there is the reading of icon name (and comment) in "foo/.directory" (KFileItem::iconName calling KFolderMimeTypePrivate::iconName).
Even though it's cached (so it happens only once per dir), it still makes things quite slow. 
=> Fixed too (skipped for nfs/smb mounts).

The last issue seems to be when navigating away from a directory, KDirListerCache::forgetDirs calls manually_mounted for -each item-, which calls realFilePath, which stats... There's already a TODO for porting that code to Solid, assigned to afiestas ;-)
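
A minimal sketch of the kind of check described above, assuming statfs()-based detection on Linux; the function name is illustrative and this is not the actual kdelibs patch (see the commit in the next comment). The magic numbers are the well-known Linux values; note that the FUSE magic also covers local FUSE filesystems such as ntfs-3g, which is why a real implementation also has to look at the mount type string (see the later comments):

#include <sys/vfs.h>

static bool looksLikeNetworkMount(const char *path)
{
    struct statfs buf;
    if (statfs(path, &buf) != 0)
        return false;                 // unknown -> treat as local, the feature-full default
    switch (static_cast<unsigned long>(buf.f_type)) {
    case 0x6969:                      // NFS_SUPER_MAGIC
    case 0x517B:                      // SMB_SUPER_MAGIC
    case 0xFF534D42:                  // CIFS_MAGIC_NUMBER
    case 0x65735546:                  // FUSE_SUPER_MAGIC (sshfs, but also local fuse fs like ntfs-3g)
        return true;
    default:
        return false;
    }
}

Callers such as the per-item counting done while painting could bail out early when a check like this returns true.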
Comment 21 David Faure 2011-10-28 17:17:09 UTC
Git commit 02b7eb5d92daae4373e7d38d2d952a688bd42079 by David Faure.
Committed on 28/10/2011 at 19:12.
Pushed by dfaure into branch 'KDE/4.7'.

Detect network mounts and skip slow operations while listing them.

On NFS/SMB/SSHFS, KIO no longer counts items in directories (e.g.
for dolphin in detailed view mode), nor does it look at subdir/.directory
files for custom icons or comments.

This commit will improve speed greatly in kdelibs-4.7.4 and later.
There's just one issue remaining, when navigating away from a directory
with many items, but that's a TODO for afiestas -- if he agrees :)

CCBUG: 178678

M  +16   -1    kdecore/io/kfilesystemtype_p.cpp
M  +5    -3    kdecore/io/kfilesystemtype_p.h
M  +2    -1    kio/kio/kdirmodel.cpp
M  +43   -9    kio/kio/kfileitem.cpp
M  +7    -0    kio/kio/kfileitem.h
A  +55   -0    kio/kio/kfileitemaction_twomethods-reverted.diff

http://commits.kde.org/kdelibs/02b7eb5d92daae4373e7d38d2d952a688bd42079
Comment 22 tony 2011-10-28 17:31:38 UTC
Thank you for looking and fixing this;
I looked a bit at the code; wouldn't it be better to leave some room for other filesystems too?
I mean that, as far as I understood, non-"normal" filesystems seem to be hardcoded in kfilesystemtype_p; what if I use an exotic one?
There's still curlftpfs, httpfs and god only knows how many fuse implementations :)
I'm not a C coder and I don't know how much trouble it could cause, but what about a .rc file filled with slow filesystems?
Comment 23 David Faure 2011-10-29 07:38:49 UTC
Sure, feel free to give me a list of "network mounts" filesystems :-)

A config file won't do, because there are two implementations of KFileSystemType: one which looks at the string returned by mount (the code where I added fuse.sshfs), and one (faster, and used on Linux) which looks at the statfs.f_type "magic numbers"; see the code where I added SSHFS_SUPER_MAGIC. So to add detection of new filesystem types, I need "super-magic" numbers as well.

I still want the fallback for unknown filesystems to be "local and normal", since that's the most feature-full case, and people seem to come up with new local filesystems all the time.

curlftpfs and httpfs are based on fuse like sshfs, right? I guess I could provide a small test program that outputs the statfs.f_type value for a given directory, unless someone can find documentation about these values somewhere...
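
A rough version of such a test program (a sketch, not an official KDE tool): compile it with g++ and run it on a mount point, then report the printed value.

#include <sys/vfs.h>
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }
    struct statfs buf;
    if (statfs(argv[1], &buf) != 0) {
        std::perror("statfs");
        return 1;
    }
    // Print the filesystem "super magic"; e.g. sshfs/FUSE mounts report 0x65735546.
    std::printf("statfs.f_type for %s = 0x%lx\n", argv[1], static_cast<unsigned long>(buf.f_type));
    return 0;
}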
Comment 24 aditsu 2011-10-29 10:37:23 UTC
If possible, I suggest automatically detecting all filesystems based on fuse.
Also, it would be nice if the user could manually switch between "local" and "network" browsing modes for a location.
Comment 25 Heinz Wiesinger 2011-10-29 10:53:38 UTC
(In reply to comment #24)
> If possible, I suggest automatically detecting all filesystems based on fuse.

You can't generalize like that. There are local filesystems based on fuse as well (like ntfs-3g). Treating them as remote filesystems would just be plain wrong.
Comment 26 tony 2011-10-29 11:01:13 UTC
@David Faure:
> Detect network mounts and skip slow operations while listing them.
Is 'skip' intended as 'never do it' or just 'delay it'?
Because in the second case the logic could be used for network and local filesystems too (?)
Comment 27 tony 2011-10-29 11:09:38 UTC
(In reply to comment #23)
> A config file can't do, because there are two implementation of
> KFileSystemType: one which looks at the string returned by mount (the code
> where I added fuse.sshfs), and one (faster, and used on linux) which looks at
> statfs.f_type "magic numbers"

Couldn't that file be in the format:
[network_filesystems_strings]
.
.
.
[network_filesystems_super_magic]
.
.
.
(EOF)
...so that the first entry that matches automatically identifies the fs type?

I intend that file just as a fallback for when the hardcoded values are unable to identify the filesystem, so the performance impact should not be high, and the user won't need to wait for another bug report to be closed and another kdelibs release before a newly created fs is supported.
Comment 28 Alex Fiestas 2011-10-29 17:54:39 UTC
> This commit will improve speed greatly in kdelibs-4.7.4 and later.
> There's just one issue remaining, when navigating away from a directory
> with many items, but that's a TODO for afiestas -- if he agrees :)
I do agree, but what do I have to do?

Your word is my command.
Comment 29 David Faure 2011-10-31 10:51:40 UTC
Tony: and how would you figure out the "super magic" for these file systems, when I couldn't even find it in the header where these are usually defined, so I had to reverse-engineer it?
Anyway, KDE is already very configurable, but I don't think this needs a config file, i.e. delegating the problem to users. This should rather be done right in the code, so that it only needs to be solved once, not by every user. Whatever you would have put in that config file, tell me, and I'll put it in the code :-)

Alex: see `grep afiestas kdirlister.cpp`
Comment 30 Michael Berlin 2011-11-08 10:07:53 UTC
Hi,

I'm the developer of the fuse client of the distributed file system XtreemFS (see www.xtreemfs.org for more information). Can you please also include our file system and recognize it as "network file system"?

Our client reports a file system type of the form "xtreemfs@<server url>", so checking for "xtreemfs@" at the beginning would be sufficient:

$ df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
xtreemfs@demo.xtreemfs.org/demo
                       20G  7.0G   13G  36% /mnt

Btw: I cannot give you a super magic number other than the one sshfs reports. See:

$ strace stat -f /mnt 2>&1|grep statfs|grep mnt
statfs("/mnt", {f_type=0x65735546, f_bsize=131072, f_blocks=161229, f_bfree=104645, f_bavail=104645, f_files=2048, f_ffree=2048, f_fsid={0, 0}, f_namelen=1024, f_frsize=131072}) = 0

From my file system in userspace I have no way to set the struct member f_type. Instead I've found this super magic number in the FUSE kernel code:
http://lxr.free-electrons.com/source/fs/fuse/inode.c#L50

So I guess you'll have to rely on the file system type string to distinguish between fuse implementations.

Best regards,
Michael
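
A minimal illustration of the string-based fallback this implies (illustrative only, not the actual kfilesystemtype_p.cpp code; the prefix list just collects the examples mentioned in this report):

#include <QString>

// Since all FUSE filesystems share one statfs magic, the mount's reported type
// string has to be inspected to tell them apart (e.g. "fuse.sshfs",
// "xtreemfs@demo.xtreemfs.org/demo", "curlftpfs#ftp://...").
static bool isKnownNetworkFsName(const QString &fsTypeName)
{
    return fsTypeName.startsWith(QLatin1String("fuse.sshfs"))
        || fsTypeName.startsWith(QLatin1String("xtreemfs@"))
        || fsTypeName.startsWith(QLatin1String("curlftpfs"))
        || fsTypeName.startsWith(QLatin1String("nfs"))
        || fsTypeName.startsWith(QLatin1String("cifs"));
}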
Comment 31 David Faure 2011-11-08 10:20:58 UTC
Git commit 8b57d2b80329c9d005145354bdd5db8de3d6ede6 by David Faure.
Committed on 08/11/2011 at 11:20.
Pushed by dfaure into branch 'KDE/4.7'.

Sigh, with FUSE nothing is ever simple.

Thanks to Michael Berlin for the information about that. Add xtreemfs.
CCBUG: 178678

M  +7    -5    kdecore/io/kfilesystemtype_p.cpp

http://commits.kde.org/kdelibs/8b57d2b80329c9d005145354bdd5db8de3d6ede6
Comment 32 Antonio Orefice 2011-11-08 11:04:03 UTC
Looking at the code, it seems that curlftpfs is still missing, and from what I understood fuse has just one magic number.
So, here's how a filesystem using curlftpfs appears in my mounts:

Gozer ~ # mount|grep curlftpfs
curlftpfs#ftp://s1/ on /mnt/ftp type fuse (rw,nosuid,nodev,noatime)
curlftpfs#ftp://s2:212/T/ on /mnt/nasftp type fuse (rw,nosuid,nodev,noatime)

thanks!
Comment 33 Lars Altenhain 2011-12-14 10:54:02 UTC
http://commits.kde.org/kdelibs/02b7eb5d92daae4373e7d38d2d952a688bd42079 has a side effect on setups where the users' home directories are mounted from an NFS server. When using the Desktop Icons activity, the icons on the desktop all show the gear symbol because ~/Desktop is considered a "slow" folder.
Comment 34 Markus Grabner 2011-12-31 11:42:31 UTC
I can confirm this side effect, there is also a forum thread on this issue started by another user (http://forum.kde.org/viewtopic.php?f=18&t=98122).

Konqueror has separate file size limits for the preview of local and remote files. How difficult would it be to implement a similar configuration to select whether icons should be displayed for remote directories
1.) either globally, or
2.) on a per-directory basis?

Before KDE 4.7.4, I never noticed any performance difference on the desktop between a local and a remote home directory (on a 100 Mbit ethernet network), so for me the proposed fix is actually a regression. Due to the vastly different requirements in different situations, I think it is very hard to automatically decide whether icons should be displayed; therefore a user choice seems preferable IMO. What do you think?

Kind regards,
Markus
Comment 35 Nikita Melnichenko 2011-12-31 15:26:45 UTC
BTW, the supermagic criterion doesn't cover the case of encfs mounted over sshfs, because encfs is not a remote filesystem in any way.
I suppose we need a different decision here. A global option to halt any excessive searching would be just great! Unfortunately, I doubt it will ever be implemented...
Comment 36 mal 2011-12-31 18:42:40 UTC
It was me who started the thread mentioned in comment 34. This side effect is, from a
purely cosmetic point of view, pretty awful. It makes the folder view desktop widget
look terrible, as it only shows the mimetype icons, not the proper link icons, so
(with opensuse) you get by default a desktop widget with 5 or 6 cogs in it. It hardly
sells the desktop. A little option in the widget to let you see the real icons in this
case only, i.e. the desktop folder, would be nice. Is that possible?

Mal
Comment 37 Markus Grabner 2012-01-01 20:45:34 UTC
Although it is now almost one year until Christmas :-), I'd like to sketch a different solution:

1.) When the user requests to display the contents of a directory (including the case of the desktop folder), the system tries for a fixed amount of time (e.g., half a second) to load all icons in the same way as it is currently done for a local file system.
2.) If loading all icons completes successfully before the timeout, the folder content is displayed with proper icons.
3.) If the timeout is exceeded for whatever reason (e.g., requesting an sshfs-mounted directory containing many files), those items for which an icon has been loaded (if any) are displayed, for other items the cog wheel is displayed.
4.) After the timeout, loading the icons is continued in the background, and the view is updated continuously as more icons are loaded. Thereby the user can start browsing the folder contents without having to wait for all icons to be loaded.
5.) Those icons which are currently visible (taking into account if the user scrolls down) are fetched first.

With this approach, the user interface would always feel responsive, and there is no need to define a criterion to decide whether or not to load the appropriate icons. The only parameter here is the timeout, which, however, is not very critical (it must be small enough to avoid unpleasant delays, and large enough to prevent even files on a local file system from showing up as cog wheels for a tiny moment). I implemented a similar method in a photo browsing application to display photo thumbnails, and I really like its responsive behaviour when accessing a collection of ~10000 photos.

What do you think about this proposal? Would it require a major redesign of the existing software to implement it?

Kind regards,
Markus
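
To make the proposal concrete, here is a minimal Qt sketch of the timeout idea (purely illustrative: resolveIcon, the item list and the timings are made up, and this is not Dolphin/KIO code). Icon resolution runs on worker threads, after half a second anything unresolved would be drawn with a placeholder, and results keep arriving and updating the view afterwards.

#include <QCoreApplication>
#include <QDebug>
#include <QFutureWatcher>
#include <QStringList>
#include <QThread>
#include <QTimer>
#include <QtConcurrent>

// Stand-in for the slow part (e.g. reading subdir/.directory over sshfs).
static QString resolveIcon(const QString &name)
{
    QThread::msleep(200);
    return name.endsWith(QLatin1Char('/')) ? QStringLiteral("folder") : QStringLiteral("unknown");
}

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    const QStringList items = { QStringLiteral("a/"), QStringLiteral("b/"), QStringLiteral("c.txt") };

    // Kick off icon resolution on worker threads; each result would update the view as it arrives.
    auto *watcher = new QFutureWatcher<QString>(&app);
    QObject::connect(watcher, &QFutureWatcher<QString>::resultReadyAt, [&](int i) {
        qDebug() << items.at(i) << "-> icon" << watcher->resultAt(i);
    });
    watcher->setFuture(QtConcurrent::mapped(items, &resolveIcon));

    // After the timeout, anything still unresolved is drawn with a generic placeholder icon;
    // resolution keeps running in the background and the view is updated later.
    QTimer::singleShot(500, [] {
        qDebug() << "timeout reached: show placeholders for whatever is still pending";
    });

    QTimer::singleShot(1500, &app, &QCoreApplication::quit);
    return app.exec();
}

The same pattern would apply to any other slow per-item lookup (item counts, .directory icons, previews).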
Comment 38 Maxime Chassagneux 2012-01-13 13:21:11 UTC
I think Markus Grabner's proposal is really good!
I've opened bug 290680, which is related to this thread, and your workaround could fix it too!
Comment 39 Antonio Orefice 2012-01-13 13:27:43 UTC
I agree, having a full async filemanager (not only for thumbnails) would be perfect.
Comment 40 Alvaro Aguilera 2012-01-13 13:31:12 UTC
How about just stopping non-critical processes after timeout instead of moving them to the background? I don't like to push the dust under the carpet.
Comment 41 Maxime Chassagneux 2012-01-13 13:33:55 UTC
As these processes are async why not...
Comment 42 Antonio Orefice 2012-01-13 13:36:33 UTC
@#39:
It would not be like pushing the dust under the carpet, but more like throwing it away as you walk.
What I imagine is just like the thumbnails view, where what is really updated is just what is needed (i.e. what you see).
Comment 43 Alvaro Aguilera 2012-01-13 13:39:10 UTC
Because it would be easier to implement and less error-prone. This bug was opened in 2008 and is still unresolved. Going fully async would probably create more problems than it solves.
Comment 44 Antonio Orefice 2012-01-13 13:43:45 UTC
...probably you're right,
but dropping features because they're not easy to implement is (imho) a big loss.
Comment 45 Antonio Orefice 2012-01-13 13:46:02 UTC
Anyway, a big step forward in identifying some network filesystems has been made; dolphin is now much snappier here with 4.7.4, so there isn't so much hurry anymore.
Comment 46 Maxime Chassagneux 2012-01-13 13:49:01 UTC
But since 4.7.4 there is a regression: the personal icon display feature has disappeared!
Comment 47 knallio 2012-01-24 12:09:00 UTC
We have NFS-mounted /home directories in our office on a Gbit network. We never had any performance issues, but now our users are complaining that the number of elements in a folder is not displayed anymore. So the possibility to enable/disable the directory item counting for NFS would be really great.
Comment 48 Antonio Orefice 2012-01-30 08:52:12 UTC
In kde 4.8 dolphin shows the number of items contained in directories as soon as you scroll, good!
Personal icon display is still missing, but I bet it will be the next step.
Comment 49 David Faure 2012-02-06 14:19:22 UTC
Re comments 33, 34, and 36: this looks like a bug in folderview. If it's still not fixed, please report it there; I don't maintain that code.

About the idea of making this delayed: yes it's probably the best solution. We already do delayed mimetype detection (for cases where the extension is unknown or ambiguous), we could also do delayed ".directory-file loading". Peter, what do you think? I'm not sure how much dolphin is still using kdelibs for the delayed mimetype determination stuff...
Comment 50 Peter Penz 2012-02-06 14:50:08 UTC
> Peter, what do you think? I'm not sure how much dolphin is
> still using kdelibs for the delayed mimetype determination stuff...

@David: Dolphin only uses KFileItem and KDirLister from kdelibs. So in case we do delayed mimetype detection, it would be absolutely fine from a Dolphin point of view for the .directory file to be read asynchronously by kdelibs and the icon to be updated in Dolphin later. If you have time to implement this, please just forward me the patch so that I can check it in the scope of Dolphin (or keep me on CC when pushing the patch). Thanks!
Comment 51 Stef Bon 2012-03-01 17:53:40 UTC
Hi,

I'm working with a FUSE fs and have experienced the same. The filesystem by design provides a .directory that fits the target: an smb share gets an smb share icon, a hard disk gets a harddrive icon, and a usb stick gets a removable pendrive icon. This is not slow at all, so the very strict choice to not process the .directory at all is not the best option in my opinion.

In my opinion a heuristic approach, like the one described in comment 37, is better. And maybe some support from other services as well. I'm developing a successor to gamin, the filesystem change notifier, and am thinking of enabling some caching abilities.

Caching the .directory file is one of the possibilities....

Stef
Comment 52 Stef Bon 2012-03-03 08:02:19 UTC
Hi,

I've been looking into the cause of this issue. I agree that a background thread or other process doing the (slow) reading of mimetypes and .directory files might work and make the reading a lot faster.

But the problem here is what causes this. First, the determination of the filetype (mimetype for regular files). It can be slow, and doing that over and over again is silly. Why not use the st_rdev field of the stat struct? It's not used for regular files, and it is large enough to hold a unique id into a (standard) database of filetypes.

This st_rdev attribute is only used for special files as far as I know, and is zero for regular files and directories. On my machine its type is unsigned long int, or even unsigned long long int.

In any case it's big enough.

It's safe to use an id: an mp3 file will not suddenly turn into an ogg file, and a pdf will never become a doc or something else.
Apps creating these files can set this value, and apps using the file can set it when it hasn't been set before, or correct it when it's not set to the right value.

When using this value, no other process is required; it comes to an app like dolphin just like the mode (filetype, owner, group, permissions, times) does.

Further, when it comes to directories, reading the .directory file is a time-consuming task. In my FUSE fs, I'm using the mimetype for directories like:

mimetype="inode/directory;role=remote.net.smb.network"

for the directory representing the SMB network for example. More examples:

local.map.system.programs
local.dev.cdrom.audio
local.map.documents
local.map.audio

etc.

Because the mimetype gives you a default icon (just as the mimetype does for regular files), it's not necessary to use a .directory file to set an icon.
And it's possible to also store this mimetype for directories in the st_rdev field.

I'm not saying that .directory files are useless, but using the mimetype for directories can speed things up. And - as I see it - directories can have a special meaning within the context of the computer or the user, and until now there has been no way to set this. Using the role part of the mimetype makes that possible.

Stef
Comment 53 bohwaz 2012-05-02 17:54:06 UTC
Same problem with icons showing as cog wheels on KDE 4.7.4 (debian testing) with an NFS home. I never had any slowness problems with KDE before, so did you remove this feature?
Comment 54 David Faure 2012-10-09 11:50:53 UTC
Clearly the solution to make everyone happy is to load icons delayed, just like mimetype determination and previews. With fast filesystems users won't notice any difference, and with slow filesystems, users will get a directory listing as fast as possible, and details later on.
Comment 55 Bjoern Olausson 2012-10-29 08:19:55 UTC
This issue drives me crazy. Browsing a NFS/CIFS share which is mounted via my DSL Internet connection is no fun - I have to do it multiple times a day...

A simple option which prevents Dolphin from reading the mime types and especially from recursively counting the number of items (which is totally uninteresting, by the way) in the displayed folders would do the trick for me - removing the tab did not, in my test, stop Dolphin from counting...

Cheers,
Bjoern
Comment 56 Alvaro Aguilera 2012-10-29 08:31:28 UTC
I think Dolphin's developers should use a slow, high-latency connection to a remote folder tree with thousands of files as their _primary_ test environment. If a feature works well there, then it will also shine with a local SSD drive. And if it doesn't, then it's probably a bad idea to add that feature.
Comment 57 Evstifeev Roman 2013-02-11 18:13:02 UTC
Just bumped into this problem with an smb directory mounted with mount.cifs. There are 10k avi video files there, and opening it in dolphin shows an empty directory for more than 3 minutes while loading. (This is over a wi-fi connection.)
Opening the same directory with KIO (smb:// link) loads and shows the directory contents iteratively, but it still takes over 3 minutes to load completely.
Comment 58 David Faure 2013-03-27 13:39:14 UTC
> 
> Git commit 6369b556ae9beef6888699d23b91326bac950ba4 by David Faure.
> Committed on 27/03/2013 at 14:29.
> Pushed by dfaure into branch 'KDE/4.10'.
> 
> Implement delayed loading of slow icons (from .desktop or .directory files)
> 
> Thanks to Pierre Sauter for making me fix this, and for testing that it
> works over sshfs.
> 
> FIXED-IN: 4.10.2
> 
> M  +19   -3    kio/kio/kfileitem.cpp
> M  +6    -0    kio/kio/kfileitem.h
> M  +2    -0    kio/tests/kfileitemtest.cpp
> 
> http://commits.kde.org/kdelibs/6369b556ae9beef6888699d23b91326bac950ba4
Comment 59 James 2014-01-09 17:35:50 UTC
Upgraded to LinuxMint 16 with KDE 4.11.3 and Dolphin 4.11.3 and this bug seems to be back.

Worked fine on LM 15. Please let me know if you need more information.
Comment 60 Julian Kalinowski 2014-02-19 09:21:50 UTC
I was just experiencing a similar problem when browsing a CIFS-mounted share with a slow implementation of the dfree_command (using greyhole storage pooling, but the only thing that matters here is that counting free disk space is relatively slow).

When I use dolphin to access a folder on the CIFS-mounted share, the disk free command gets called once for every subfolder in detail view, or whenever you mouse over one so that the information side bar is refreshed, even though it is the same filesystem and the information is never displayed to the user.
I think this behaviour is a bug.

This doesn't matter with a fast df command, as samba provides by default, but with a command that takes 0.1 s to execute (greyhole), it really slows navigating down.
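
One way the repeated dfree hits described here could be avoided, sketched under the assumption that the expensive part is a per-path free-space query; cachedFreeSpaceFor is a hypothetical helper, not KIO API:

#include <sys/stat.h>
#include <sys/statvfs.h>
#include <QHash>

// Query the free space once per filesystem (keyed by st_dev) instead of once per
// subfolder. A real implementation would also expire cache entries after a short while.
static qulonglong cachedFreeSpaceFor(const char *path)
{
    static QHash<dev_t, qulonglong> cache;   // one entry per mounted filesystem
    struct stat st;
    if (::stat(path, &st) != 0)
        return 0;
    const auto it = cache.constFind(st.st_dev);
    if (it != cache.constEnd())
        return it.value();                   // same device -> reuse, no extra dfree call
    struct statvfs vfs;
    qulonglong freeBytes = 0;
    if (::statvfs(path, &vfs) == 0)
        freeBytes = qulonglong(vfs.f_bavail) * vfs.f_frsize;
    cache.insert(st.st_dev, freeBytes);
    return freeBytes;
}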
Comment 61 Bjoern Olausson 2014-02-19 15:55:11 UTC
I agree 100%. Polling any information over a slow connection has to be avoided. It is more useful to quickly browse the directories than to have mimetype items and other details appearing after 10 minutes for a large folder. If you want the information, add a "fetch info" button or right-click on a file/dir to show properties. Don't even think about doing it in the background, as it will slow down the connection.

This slows down my entire KDE desktop and occasionally freezes Dolphin.
Comment 62 James 2014-02-19 18:38:17 UTC
I can watch it SLOWLY populate file counts.

This is also over a LAN, 1 Gbit ethernet from my machine to the server, so it's not just slow connections. It doesn't usually crash Dolphin on the LAN. But it is still slow.



(In reply to comment #61)
> I agree for 100%. Polling any information on a slow connection has to be
> avoided. It is more useful to quickly browse the directories then having
> mimetype items and stuff appearing after 10 minutes for a large folder. If
> you want to hav information add a button "fetch info" or rightklick on
> file/dir to show properties. Don't even think about doing it in the
> backround as it will slow down the connection.
> 
> This slows down my entire KDE desktop and occasionally freezes Dolphin.
Comment 63 Albert Vaca Cintora 2014-02-19 19:07:02 UTC
Yes, this looks like a regression in the new version of Dolphin.
Comment 64 Antonio Orefice 2014-02-20 12:49:41 UTC
Will somebody read these comments or do we need to open a new bug?
Comment 65 empire 2014-08-07 05:39:56 UTC
Yes please, fix this. 

NO metadata loading at all on networked drives. It's pretty straightforward. The delays over a slow WAN are huge and not worth it.
Comment 66 Nate Graham 2018-04-23 22:11:52 UTC
*** Bug 215953 has been marked as a duplicate of this bug. ***
Comment 67 Germano Massullo 2020-06-06 10:40:43 UTC
*** Bug 420168 has been marked as a duplicate of this bug. ***
Comment 68 Germano Massullo 2020-06-06 10:44:54 UTC
(In reply to Germano Massullo from comment #67)
> *** Bug 420168 has been marked as a duplicate of this bug. ***
Summary of my bug report:
1 Gbit/s network connection to the NAS. I mount the remote folder with the command
$ sshfs -o cache=yes -o kernel_cache -o compression=no user@ip_address:/zpool_1/foo /tmp/folder
Dolphin is very slow to populate the view while browsing big folders.
When I use GNOME Nautilus instead, it is as fast as a local folder.
I noticed that, after mounting the remote storage, if I browse the folder first with Nautilus and then with Dolphin, the latter will be as fast as it should be.
Comment 69 Ahmad Samir 2020-06-06 13:26:07 UTC
Could you also test with krusader? (both dolphin and krusader use KIO).
Comment 70 Harald Sitter 2020-06-08 09:55:05 UTC
Alas, can't reproduce.

- Are we all talking about sshfs mounts?
- Are we all talking about standard local networks without tunnels and the like?
- What version of kde frameworks does this happen with?
- What version of dolphin does this happen with?
- Is it also slow when you run `time kioclient5 ls 'file:/tmp/folder'`
- What's the entire output of that kioclient5 command?
- Are dolphin previews enabled?
- What does the content of the directory that this happens with look like? How many files are there? What type of files are they? Are they very large, very small, or just a mixed bag? How many subdirectories are there? How many files and what type of files do they contain on average? The greater the detail, the better.
Comment 71 Germano Massullo 2020-06-12 16:30:57 UTC
(In reply to Harald Sitter from comment #70)
> Alas, can't reproduce.
> 
> - Are we all talking about sshfs mounts?

Myself, yes

> - Are we all talking about standard local networks without tunnels and the
> like?

Myself SSHFS running on local area network

> - What version of kde frameworks does this happen with?

KDE Frameworks 5.70.0

> - What version of dolphin does this happen with?

19.12.2

> - Is it also slow when you run `time kioclient5 ls 'file:/tmp/folder'`

FOLDER 1
real    0m0,254s
user    0m0,069s
sys     0m0,035s

In Dolphin it took almost 1 minute to populate the size column and become responsive again

FOLDER 2 (a big subfolder of folder 1)
real    0m0,136s
user    0m0,070s
sys     0m0,031s

In Dolphin it took almost 1 minute to populate the size column and become responsive again

> - What's the entire output of that kioclient5 command?

I cannot paste here the content of my folders

> - Are dolphin previews enabled?

No

> - What does the content of the directory that this happens with look like?

folder 1 and folder 2 contain .jpg .cr2 .nef .xmp .mp4 files

> How many files are there? What type of file are? Are they very large or
> small or just a mixed bag? How many subdirectories are there? How many files
> and what type of files do they contain on average? The greater detail the
> better.

Dolphin folder 1 properties returns 38243 files, 807 subfolders.
The average filesize is 10 MB


**Also folder properties takes ages to retrieve the number of files and folders**
Krusader seems to behave waaaaay better
Comment 72 Harald Sitter 2020-06-15 11:07:23 UTC
Created attachment 129379 [details]
script to generate a large content tree

19.12.2 is no longer supported.
Can anyone reproduce with 20.04.1 or git master?

What's more, I can't reproduce this. I've set up a tree similar to the one described (gen_tree.rb attachment) with 38k files and 800 folders. I access it from a VM over a gigabit network with dolphin 20.04+kf5.70 in details view mode, and it takes maybe a second or two to load the content. Both server and client use openssh 8.2.

Without further information I'm not sure we can do anything here. Perhaps it's already fixed in 20.04, perhaps it's not got anything to do with KIO or dolphin.

client-side command for the record:
mkdir /tmp/folder; echo 1 | sudo tee -a /proc/sys/vm/drop_caches; sudo umount /tmp/folder; sshfs -o cache=yes -o kernel_cache -o compression=no me@bear.local:/srv/foo /tmp/folder && dolphin --new-window /tmp/folder/folder1
Comment 73 Germano Massullo 2020-06-16 16:07:15 UTC
(In reply to Harald Sitter from comment #72)
> Created attachment 129379 [details]
> script to generate a large content tree
> 
> 19.12.2 is no longer supported.
> Can anyone reproduce with 20.04.1 or git master?

I installed neon-unstable-20200614-1102.iso
that ships
Dolphin 20.07.70
KDE Frameworks 5.72.0
Qt 5.14.2
Krusader 2.8.0-dev
GNOME Nautilus 3.26.4

I can still reproduce the problem with Dolphin; it's a bit more responsive than 19.12.2 but it still hangs.
Krusader performs much better, but not as well as Nautilus, which shows almost no stuttering while browsing huge directories.
Note that I rebooted the system every time I had to test another file manager, because this problem happens only the first time you open the directory after a system boot.

Some outputs of
time kioclient5 ls 'file:/tmp/folder'
run on some of the folders that trigger the problem

real    0m0,222s
user    0m0,087s
sys     0m0,026s

real    0m0,154s
user    0m0,067s
sys     0m0,040s

real    0m0,172s
user    0m0,084s
sys     0m0,026s
Comment 74 Harald Sitter 2020-06-17 08:56:20 UTC
Does the server have a HDD or an SSD?
While dolphin hangs what are the top CPU consumers in ksysguard?
Also, while it hangs do you see network activity in ksysguard?
Comment 75 Harald Sitter 2020-06-17 09:31:20 UTC
... and does it make a difference if you disable the information panel on the right, or disable the Preview in there?
Comment 76 Germano Massullo 2020-06-24 13:07:15 UTC
(In reply to Harald Sitter from comment #74)
> Does the server have a HDD or an SSD?
> While dolphin hangs what are the top CPU consumers in ksysguard?
> Also, while it hangs do you see network activity in ksysguard?

Thanks to people in the #zfsonlinux IRC channel I got a way to reliably reproduce the problem. On the server
# zpool export tank
# zpool import tank
clears the entire ZFS ARC cache. Disks are HDD in RAID-Z configuration.
After each time I remount the SSHFS mount on the client.

I have run tests several times with both GNOME Nautilus and KDE Dolphin, running zpool export / import, and what I found out is:
both Nautilus and Dolphin need a lot of time to populate the "Size" column, and both show network usage during this activity, but:
- Nautilus does not hang and the user can browse folder without being stuck
- Dolphin starts to lag and the user interface becomes stuck.

During this period of time, client and server are communicating and the network activity is in the order of ~tens of kB/s.

So I really think it is a matter of how Dolphin threads are handled

(In reply to Harald Sitter from comment #75)
> ... and Does it make a difference if you disable the information panel on
> the right, or disable the Preview in there?

I never used preview and information panel on the right.

Concerning Krusader, in my opinion it is way smoother than Dolphin because it does not calculate the size of subfolders: you only see a <DIR> entry in the Size column. The size is calculated only for files in the current path, not for subfolders.
Comment 77 Harald Sitter 2020-06-24 15:32:26 UTC
I am still not able to reproduce this :/
Are you 100% confident that previews aren't enabled? Do you know how to use gdb so we can verify that?

For good measure I did try with previews and I can kind of reproduce the intermittent lockups, but only when previews are enabled. I'm starving IO responsiveness by setting a cap on read requests per second `IOReadIOPSMax='/dev/sdc 4'` of the ssh slice and that then indeed causes dolphin to get choppy as it is waiting. What seems to happen there is that a mimetype query is issued before starting a PreviewJob [1] and that is a read operation unless the KFileItem was explicitly told not to look at the content [2]. Without previews nor the info sidebar I can even limit IO per second to 1 and dolphin will stay responsive. So, something still doesn't add up here.

[1] https://invent.kde.org/system/dolphin/-/blob/master/src/kitemviews/kfileitemmodelrolesupdater.cpp#L905 
[2] https://invent.kde.org/frameworks/kio/-/blob/master/src/core/kfileitem.cpp#L506
Comment 78 Germano Massullo 2020-06-26 17:35:59 UTC
Created attachment 129705 [details]
gwenview multiple files "move to"

(In reply to Harald Sitter from comment #77)
> Are you 100% confident that previews aren't enabled?

Yes, and moreover I found out that this also happens in Gwenview when:
1. I select **more than one picture**, then right-click "Move to"
2. In the new "Move to" window, there is a path textbox. As soon as I start to type a path that is on the remote SSHFS mount, Gwenview completely freezes and I can see the same network activity between the server and the client.

> Do you know how to use gdb so we can verify that?

I know how to use it, but I would like a detailed step-by-step procedure so I can better provide all the information you may need
Comment 79 Harald Sitter 2020-06-29 09:31:49 UTC
Actually on second thought we'll not need gdb at all

- close all running instances of dolphin
- open ksysguard
- search for 'thumb'
- if thumbnail or thumbnail.so or anything similar is running make sure to terminate it before proceeding
- once it is closed, run
- KDE_FORK_SLAVES=1 dolphin --new-window /the/path/of/our/mount
- scroll around a bit
- check ksysguard again
- if thumbnail.so is running (again) then previews are being generated somehow

should thumbnail.so not get started again then previews are disabled; otherwise they somehow are enabled (in which case we know it's the previewing starving the IO).
Comment 80 Bug Janitor Service 2020-07-14 04:33:08 UTC
Dear Bug Submitter,

This bug has been in NEEDSINFO status with no change for at least
15 days. Please provide the requested information as soon as
possible and set the bug status as REPORTED. Due to regular bug
tracker maintenance, if the bug is still in NEEDSINFO status with
no change in 30 days the bug will be closed as RESOLVED > WORKSFORME
due to lack of needed information.

For more information about our bug triaging procedures please read the
wiki located here:
https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

If you have already provided the requested information, please
mark the bug as REPORTED so that the KDE team knows that the bug is
ready to be confirmed.

Thank you for helping us make KDE software even better for everyone!
Comment 81 Bug Janitor Service 2020-07-29 04:33:11 UTC
This bug has been in NEEDSINFO status with no change for at least
30 days. The bug is now closed as RESOLVED > WORKSFORME
due to lack of needed information.

For more information about our bug triaging procedures please read the
wiki located here:
https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

Thank you for helping us make KDE software even better for everyone!
Comment 82 Germano Massullo 2020-08-08 19:22:19 UTC
(In reply to Harald Sitter from comment #79)
> Actually on second thought we'll not need gdb at all
> 
> - close all running instances of dolphin
> - open ksysguard
> - search for 'thumb'
> - if thumbnail or thumbnail.so or anything similar is running make sure to
> terminate it before proceeding
> - once it is closed, run
> - KDE_FORK_SLAVES=1 dolphin --new-window /the/path/of/our/mount
> - scroll around a bit
> - check ksysguard again
> - if thumbnail.so is running (again) then previews are being generated
> somehow
> 
> should thumbnail.so not get started again then previews are disabled
> otherwise they somehow are enabled (in which case we know its the previewing
> starving the IO).

I followed the procedure and no thumb* process ever appeared in the process list
Comment 83 Germano Massullo 2020-10-27 23:06:52 UTC
I have just updated to 5.75 and things seem to have improved, but I cannot do many tests due to this Dolphin crash bug -> https://bugs.kde.org/show_bug.cgi?id=427118
Comment 84 Germano Massullo 2020-11-09 17:29:33 UTC
(In reply to Germano Massullo from comment #83)
> I have just updated to 5.75 and things seemed to be improved, but I cannot
> do much tests due this Dolphin crash bug ->
> https://bugs.kde.org/show_bug.cgi?id=427118

After days of testing I can say that the problem still exists
Comment 85 Stef Bon 2020-11-09 20:27:23 UTC
Hi,

it's been a long time since I've looked at the problem.
What has changed since then? I saw that there is no separation between the default attributes like size, permissions, owner/group and c/mtimes, and the more in-depth information like mimetype, which requires much more IO.
What I believe I've mentioned before is that these lookup processes should be handled separately: some threads do the lookup of default attributes, and others do the determining of the mimetypes/icons etc.
This can be done with different queues of "lookup" tasks, and when the mimetype/icon lookups take too much time, it can switch over to a much simpler lookup by checking the extension, for example. That saves a lot of IO. But I've seen the code and it's horrible to do this, imo.
Stef
Comment 86 Germano Massullo 2020-11-14 07:46:44 UTC
(In reply to Stef Bon from comment #85)
> Hi,
> 
> it's a long time since I've looked at the problem.
> What has been changed since then? I saw that there is not a seperation
> between the default attributes like size, permissions and owner/group and
> c/mtimes, and more in depth information like mimetype, which require much
> more io.
> What I beleive I've mentioned before is that these lookup processes should
> be handled seperated: so some threads do the lookup of default attributes,
> and others do the determing of the mimetypes/icon etc.
> This can be done with different queues of "lookup" tasks, and when doing the
> lookups of mimetype.icon takes too much time, it can switch over to do a
> much more simple lookup by checking the extension for example. That saves a
> lot of io. But I've seen the code and it's horrible to do this imo.
> Stef


Also, I think that an option to disable determining mimetypes/icons for remote mounts should be added. Imagine a mounted device that runs over a slow network.
Comment 87 Stef Bon 2020-11-14 10:01:00 UTC
I agree. That should be an option. So give the user a choice:

lookups of mime/icon should be one of:

- do always, no matter what
- do heuristically; when it is too slow, either:
  - switch over to a simple determination of the mimetype by looking at the extension
    (instead of reading the first x bytes and analyzing them)
  - disable
- never do on network filesystems (which are?)

Stef
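
The extension-only option in the list above maps fairly directly onto Qt's QMimeDatabase match modes; a small sketch (the slowMount flag is assumed to come from whatever network-mount detection is in place):

#include <QMimeDatabase>
#include <QMimeType>
#include <QString>

// MatchExtension only looks at the file name, so nothing is read over the network,
// while MatchDefault may also sniff the file's first bytes.
QMimeType lookupMimeType(const QString &path, bool slowMount)
{
    QMimeDatabase db;
    return db.mimeTypeForFile(path, slowMount ? QMimeDatabase::MatchExtension
                                              : QMimeDatabase::MatchDefault);
}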
Comment 88 Stef Bon 2020-11-23 08:12:13 UTC
Enough plans now. We should start working on this issue, and I mean really start. Not only posting, but really starting. I can write a bit of C++, and I think the code is a mess, but maybe we can do a bit of a cleanup as well.
What about that?

Stef
Comment 89 Stef Bon 2020-12-03 03:05:29 UTC
Seriously.
Germano and others, this has taken far too long.
We should team up and try to solve this issue.
I'm a writer of FUSE filesystems, now busy writing an SSH server, to provide a more flexible server than the OpenSSH one for FUSE clients and IO solutions I also work on.
My speciality is C, a bit of C++ (although I much prefer C), FUSE, SSH, SFTP and network filesystems.
Can someone help me address this issue?
Stef Bon

PS: I think it's important to first analyze the problem thoroughly before doing anything, and then plan the action and stay coordinated
Comment 90 Germano Massullo 2020-12-04 12:02:32 UTC
(In reply to Stef Bon from comment #89)
> Serious.
> Germano and others. This has taken too far long.
> We should team up and try to solve this issue.

I volunteer as a tester, but at the moment I don't have time to study the entire KDE Frameworks infrastructure from scratch
Comment 91 Stef Bon 2020-12-08 10:36:05 UTC
Ok Germano,

better to be honest. I respect that. But I want some help.
For example, where do I start looking? I don't have to look at the classes for the UI, for example. I can remember:
- dolphinpart.cpp
- kitemviews/{kfileitemmodel.cpp, kstandarditemmodel.cpp}

I believe. Can someone point me a bit in the right direction (to find out why it takes too long for certain (network) filesystems, i.e. the places where there is a lot of IO for determining the mimetype)? If you can also point me to other sources like design notes, important discussions etc., please let me know.

And where can I write down and share my ideas? That's not here. Is there a developers' corner or something similar for dolphin?

Thanks in advance,

Stef Bon
the Netherlands
Comment 92 Harald Sitter 2020-12-08 11:34:06 UTC
One would first need to understand why it blocks and for that you need to catch it in the act, as it were. When it locks up grab a backtrace with gdb and that should tell you where you need to look in the code. Alternatively you could try finding the blocking code as a hot spot via perf or hotspot or callgrind, but I'm not entirely certain you'll see it in the sea of otherwise unrelated but also expensive code paths.

That being said, in my investigation I've found numerous blocking paths and isolated them into standalone bug reports (they are all linked at the top in the see also section), so I'd encourage you to check them out lest you try to track down an already known problem. The unfortunate thing is that, as I recall, Germano's description of his dolphin setup wouldn't hit any of the code paths I've found, as they all had to do with either the dolphin side bar or thumbnails. Simply put, the only things his dolphin actually does are stat urls for the dolphin file view and, every once in a while, update the free disk space info. Neither should be so slow as to cause micro blocking.
Comment 93 Stef Bon 2020-12-08 16:42:13 UTC
I know I have to see/experience why it is running slow. I'm not using gdb to run code, just to analyze a coredump. The way I follow code is by adding extra log messages to syslog, which always gives me enough info, well at least until now.
The info you point at is useful, Harald (the see-alsos).

Do you also have some information on how the interaction with the kio-slaves works?

(As a developer of FUSE filesystems I have something against these. It is far better to leave looking up the file info to the kernel and a FUSE service. Then Dolphin can concentrate more on the way it presents the file info. But it is the way it is.)

And is there a developers' corner where I can write down everything I see/discover/find/think and discuss it?

Stef Bon
Comment 94 Harald Sitter 2020-12-09 07:29:26 UTC
(In reply to Stef Bon from comment #93)
> Do you have also some information how the interaction is with the kio-slaves?

That really depends on the actual code path that is slow. KIO may not even be involved, in fact if KIO slaves were involved we'd not be having this problem as they are all sporting async API by design. It's skipping KIO that is usually causing performance troubles. Notably there are problems with certain bits of code assuming that "local URI == fast" and consequently use sync POSIX API in the GUI thread e.g. https://bugs.kde.org/show_bug.cgi?id=423502

Pretty much all the bugs in the see also section are instances of that.

> And is there a developerscorner where I can write down everything I
> see/discover/find/my opinion and discus?

https://invent.kde.org/system/dolphin/-/issues respectively https://invent.kde.org/frameworks/kio/-/issues is where we keep notes/todos. For discussion it's often times best to hop on freenode #kde-devel or just tag the relevant people in the issues on invent.
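
For illustration of the pattern Harald describes, a minimal sketch of moving such a blocking POSIX call off the GUI thread with QtConcurrent (not KIO code; the file-size lookup is just an example of a call that can stall for seconds on a cold sshfs/NFS mount):

#include <QByteArray>
#include <QCoreApplication>
#include <QDebug>
#include <QFutureWatcher>
#include <QtConcurrent>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    const QByteArray path = argc > 1 ? argv[1] : "/tmp";

    // The stat() itself may block for a long time on a slow mount, so it runs on a
    // worker thread; the GUI/event loop keeps spinning in the meantime.
    auto *watcher = new QFutureWatcher<qint64>(&app);
    QObject::connect(watcher, &QFutureWatcher<qint64>::finished, [&]() {
        qDebug() << "size of" << path << "=" << watcher->result();
        app.quit();
    });
    watcher->setFuture(QtConcurrent::run([path]() -> qint64 {
        struct stat st;
        return ::stat(path.constData(), &st) == 0 ? qint64(st.st_size) : qint64(-1);
    }));
    return app.exec();
}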
Comment 95 Germano Massullo 2021-07-05 12:13:13 UTC
I attach this Dolphin GDB stacktrace upon suggestion of Elvis Angelaccio. It has been taken while Dolphin (or KIO) was slowing down during directory reading

$ gdb dolphin
Reading symbols from dolphin...
Reading symbols from /usr/lib/debug/usr/bin/dolphin-21.04.2-1.fc34.x86_64.debug...
(gdb) run
Starting program: /usr/bin/dolphin 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffe3619640 (LWP 317541)]
[New Thread 0x7fffe29f5640 (LWP 317542)]
kf.xmlgui: KActionCollection::setComponentName does not work on a KActionCollection containing actions! "dolphin"
Dwarf Error: Cannot not find DIE at 0xfdec [from module /usr/lib/debug/usr/lib64/libjxl.so.0.3.7-0.3.7-3.fc34.x86_64.debug]

[New Thread 0x7fffd8c4d640 (LWP 317543)]
[New Thread 0x7fffcdb72640 (LWP 317544)]
[New Thread 0x7fffcd371640 (LWP 317545)]
[New Thread 0x7fffccb70640 (LWP 317546)]
[New Thread 0x7fffbffff640 (LWP 317547)]
[New Thread 0x7fffbf7fe640 (LWP 317548)]
[New Thread 0x7fffbeffd640 (LWP 317549)]
[New Thread 0x7fffbe7fc640 (LWP 317550)]
[New Thread 0x7fffbdffb640 (LWP 317551)]
[New Thread 0x7fffbd7fa640 (LWP 317552)]
[New Thread 0x7fffbcff9640 (LWP 317553)]
[New Thread 0x7fff9bfff640 (LWP 317554)]
[New Thread 0x7fff9b7fe640 (LWP 317555)]
[New Thread 0x7fff9affd640 (LWP 317556)]
[New Thread 0x7fff9a7fc640 (LWP 317557)]
[New Thread 0x7fff99ffb640 (LWP 317558)]
[New Thread 0x7fff997fa640 (LWP 317559)]
[New Thread 0x7fff98ff9640 (LWP 317560)]
[New Thread 0x7fff7bfff640 (LWP 317561)]
[New Thread 0x7fff7b7fe640 (LWP 317562)]
[New Thread 0x7fff7affd640 (LWP 317563)]
[New Thread 0x7fff7a7fc640 (LWP 317564)]
[New Thread 0x7fff79ffb640 (LWP 317565)]
^C
Thread 1 "dolphin" received signal SIGINT, Interrupt.
0x00007ffff7dbf5bf in __GI___poll (fds=0x555555fc9220, nfds=11, timeout=14523) at ../sysdeps/unix/sysv/linux/poll.c:29
--Type <RET> for more, q to quit, c to continue without paging--
29        return SYSCALL_CANCEL (poll, fds, nfds, timeout);
(gdb) set height 0
(gdb) et print elements 0
Undefined command: "et".  Try "help".
(gdb) set print elements 0
(gdb) set print frame-arguments all
(gdb) thread apply all backtrace

Thread 26 (Thread 0x7fff79ffb640 (LWP 317565) "QThread"):
#0  __GI___getdents64 (fd=34, buf=buf@entry=0x7fff680154b0, nbytes=<optimized out>) at ../sysdeps/unix/sysv/linux/getdents64.c:32
#1  0x00007ffff7d929ed in __GI___readdir64 (dirp=0x7fff68015480) at ../sysdeps/unix/sysv/linux/readdir64.c:51
#2  0x00007ffff7c87b08 in walkDir(QString const&, bool, bool, dirent64*, unsigned int) [clone .constprop.0] (dirPath=@0x7fff79ffa5c8: {d = 0x7fff68004350}, countHiddenFiles=countHiddenFiles@entry=false, countDirectoriesOnly=countDirectoriesOnly@entry=false, allowedRecursiveLevel=allowedRecursiveLevel@entry=0, dirEntry=<optimized out>) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:41
#3  0x00007ffff7c87df5 in walkDir(QString const&, bool, bool, dirent64*, unsigned int) [clone .constprop.0] (dirPath=@0x555556293600: {d = 0x5555565eed50}, countHiddenFiles=countHiddenFiles@entry=false, countDirectoriesOnly=countDirectoriesOnly@entry=false, allowedRecursiveLevel=1, dirEntry=<optimized out>) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:80
#4  0x00007ffff7c4b67e in KDirectoryContentsCounterWorker::subItemsCount (options=<optimized out>, path=@0x555556293600: {d = 0x5555565eed50}) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:113
#5  KDirectoryContentsCounterWorker::countDirectoryContents (this=0x555555d27d30, path=@0x555556293600: {d = 0x5555565eed50}, options={i = 0}) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/kitemviews/private/kdirectorycontentscounterworker.cpp:121
#6  0x00007ffff5d4e809 in QObject::event (this=0x555555d27d30, e=0x55555663dbe0) at kernel/qobject.cpp:1314
#7  0x00007ffff68c1423 in QApplicationPrivate::notify_helper (this=<optimized out>, receiver=0x555555d27d30, e=0x55555663dbe0) at kernel/qapplication.cpp:3632
#8  0x00007ffff5d24098 in QCoreApplication::notifyInternal2 (receiver=0x555555d27d30, event=0x55555663dbe0) at kernel/qcoreapplication.cpp:1063
#9  0x00007ffff5d27606 in QCoreApplicationPrivate::sendPostedEvents (receiver=0x0, event_type=0, data=0x5555559837c0) at kernel/qcoreapplication.cpp:1817
#10 0x00007ffff5d75bf7 in postEventSourceDispatch (s=0x7fff68004fe0) at kernel/qeventdispatcher_glib.cpp:277
#11 0x00007ffff3c3f4cf in g_main_dispatch (context=0x7fff68000c20) at ../glib/gmain.c:3337
#12 g_main_context_dispatch (context=0x7fff68000c20) at ../glib/gmain.c:4055
#13 0x00007ffff3c934e8 in g_main_context_iterate.constprop.0 (context=context@entry=0x7fff68000c20, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4131
#14 0x00007ffff3c3cc03 in g_main_context_iteration (context=0x7fff68000c20, may_block=1) at ../glib/gmain.c:4196
#15 0x00007ffff5d75698 in QEventDispatcherGlib::processEvents (this=0x7fff68000b60, flags=<optimized out>) at kernel/qeventdispatcher_glib.cpp:423
#16 0x00007ffff5d22ab2 in QEventLoop::exec (this=this@entry=0x7fff79ffab90, flags=<optimized out>, flags@entry={i = 0}) at ../../include/QtCore/../../src/corelib/global/qflags.h:69
#17 0x00007ffff5b6625a in QThread::exec (this=<optimized out>) at ../../include/QtCore/../../src/corelib/global/qflags.h:121
#18 0x00007ffff5b67456 in QThreadPrivate::start (arg=0x555555d2cb40) at thread/qthread_unix.cpp:329
#19 0x00007ffff45eb299 in start_thread (arg=0x7fff79ffb640) at pthread_create.c:481
#20 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 25 (Thread 0x7fff7a7fc640 (LWP 317564) "Thread (pooled)"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555bca664, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7a7fbae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555bca664, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7a7fbae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7a7fbae0, clockid=1, mutex=0x555555bca610, cond=0x555555bca638) at pthread_cond_wait.c:504
#3  __pthread_cond_timedwait (cond=0x555555bca638, mutex=0x555555bca610, abstime=0x7fff7a7fbae0) at pthread_cond_wait.c:637
#4  0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 577900995, type = <optimized out>}, this=0x555555bca610) at thread/qwaitcondition_unix.cpp:136
#5  QWaitConditionPrivate::wait (deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, this=0x555555bca610) at thread/qwaitcondition_unix.cpp:144
#6  QWaitCondition::wait (this=this@entry=0x5555558e80c0, mutex=0x555555845348, deadline={t1 = 497849, t2 = 577900995, type = 1}) at thread/qwaitcondition_unix.cpp:225
#7  0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x5555558e80b0) at thread/qthreadpool.cpp:140
#8  0x00007ffff5b67456 in QThreadPrivate::start (arg=0x5555558e80b0) at thread/qthread_unix.cpp:329
#9  0x00007ffff45eb299 in start_thread (arg=0x7fff7a7fc640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 24 (Thread 0x7fff7affd640 (LWP 317563) "Thread (pooled)"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555558baeb0, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7affcae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555558baeb0, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7affcae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7affcae0, clockid=1, mutex=0x5555558bae60, cond=0x5555558bae88) at pthread_cond_wait.c:504
#3  __pthread_cond_timedwait (cond=0x5555558bae88, mutex=0x5555558bae60, abstime=0x7fff7affcae0) at pthread_cond_wait.c:637
#4  0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 474781028, type = <optimized out>}, this=0x5555558bae60) at thread/qwaitcondition_unix.cpp:136
#5  QWaitConditionPrivate::wait (deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, this=0x5555558bae60) at thread/qwaitcondition_unix.cpp:144
#6  QWaitCondition::wait (this=this@entry=0x5555558e8230, mutex=0x555555845348, deadline={t1 = 497857, t2 = 474781028, type = 1}) at thread/qwaitcondition_unix.cpp:225
#7  0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x5555558e8220) at thread/qthreadpool.cpp:140
#8  0x00007ffff5b67456 in QThreadPrivate::start (arg=0x5555558e8220) at thread/qthread_unix.cpp:329
#9  0x00007ffff45eb299 in start_thread (arg=0x7fff7affd640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 23 (Thread 0x7fff7b7fe640 (LWP 317562) "Thread (pooled)"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555558e8534, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7b7fdae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555558e8534, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7b7fdae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7b7fdae0, clockid=1, mutex=0x5555558e84e0, cond=0x5555558e8508) at pthread_cond_wait.c:504
#3  __pthread_cond_timedwait (cond=0x5555558e8508, mutex=0x5555558e84e0, abstime=0x7fff7b7fdae0) at pthread_cond_wait.c:637
#4  0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 577762414, type = <optimized out>}, this=0x5555558e84e0) at thread/qwaitcondition_unix.cpp:136
#5  QWaitConditionPrivate::wait (deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497849, t2 = <optimized out>, type = <optimized out>}, this=0x5555558e84e0) at thread/qwaitcondition_unix.cpp:144
#6  QWaitCondition::wait (this=this@entry=0x555555bca500, mutex=0x555555845348, deadline={t1 = 497849, t2 = 577762414, type = 1}) at thread/qwaitcondition_unix.cpp:225
#7  0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x555555bca4f0) at thread/qthreadpool.cpp:140
#8  0x00007ffff5b67456 in QThreadPrivate::start (arg=0x555555bca4f0) at thread/qthread_unix.cpp:329
#9  0x00007ffff45eb299 in start_thread (arg=0x7fff7b7fe640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 22 (Thread 0x7fff7bfff640 (LWP 317561) "Thread (pooled)"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555558c4340, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7bffeae0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555558c4340, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fff7bffeae0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f15c4 in __pthread_cond_wait_common (abstime=0x7fff7bffeae0, clockid=1, mutex=0x5555558c42f0, cond=0x5555558c4318) at pthread_cond_wait.c:504
#3  __pthread_cond_timedwait (cond=0x5555558c4318, mutex=0x5555558c42f0, abstime=0x7fff7bffeae0) at pthread_cond_wait.c:637
#4  0x00007ffff5b6cf2a in QWaitConditionPrivate::wait_relative (deadline={t1 = <optimized out>, t2 = 478540456, type = <optimized out>}, this=0x5555558c42f0) at thread/qwaitcondition_unix.cpp:136
#5  QWaitConditionPrivate::wait (deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, deadline={t1 = 497857, t2 = <optimized out>, type = <optimized out>}, this=0x5555558c42f0) at thread/qwaitcondition_unix.cpp:144
#6  QWaitCondition::wait (this=this@entry=0x555555bb6c60, mutex=0x555555845348, deadline={t1 = 497857, t2 = 478540456, type = 1}) at thread/qwaitcondition_unix.cpp:225
#7  0x00007ffff5b6a724 in QThreadPoolThread::run (this=0x555555bb6c50) at thread/qthreadpool.cpp:140
#8  0x00007ffff5b67456 in QThreadPrivate::start (arg=0x555555bb6c50) at thread/qthread_unix.cpp:329
#9  0x00007ffff45eb299 in start_thread (arg=0x7fff7bfff640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 21 (Thread 0x7fff98ff9640 (LWP 317560) "dolphin:shlo3"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2e120) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff98ff9640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 20 (Thread 0x7fff997fa640 (LWP 317559) "dolphin:shlo2"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2dec0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff997fa640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 19 (Thread 0x7fff99ffb640 (LWP 317558) "dolphin:shlo1"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2dc60) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff99ffb640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 18 (Thread 0x7fff9a7fc640 (LWP 317557) "dolphin:shlo0"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29d78, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29d28, cond=0x555555c29d50) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29d50, mutex=0x555555c29d28) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29d28, cond=0x555555c29d50) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a4a0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff9a7fc640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 17 (Thread 0x7fff9affd640 (LWP 317556) "dolphin:sh8"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a430) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff9affd640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 16 (Thread 0x7fff9b7fe640 (LWP 317555) "dolphin:sh7"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a3f0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff9b7fe640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 15 (Thread 0x7fff9bfff640 (LWP 317554) "dolphin:sh6"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a3b0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fff9bfff640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 14 (Thread 0x7fffbcff9640 (LWP 317553) "dolphin:sh5"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a370) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbcff9640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 13 (Thread 0x7fffbd7fa640 (LWP 317552) "dolphin:sh4"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a330) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbd7fa640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 12 (Thread 0x7fffbdffb640 (LWP 317551) "dolphin:sh3"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a2f0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbdffb640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 11 (Thread 0x7fffbe7fc640 (LWP 317550) "dolphin:sh2"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c257d0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbe7fc640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 10 (Thread 0x7fffbeffd640 (LWP 317549) "dolphin:sh1"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c2a2b0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbeffd640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7fffbf7fe640 (LWP 317548) "dolphin:sh0"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c29670, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c29620, cond=0x555555c29648) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c29648, mutex=0x555555c29620) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c29620, cond=0x555555c29648) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c25790) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbf7fe640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 8 (Thread 0x7fffbffff640 (LWP 317547) "dolphin:disk$3"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c279e0) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffbffff640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 7 (Thread 0x7fffccb70640 (LWP 317546) "dolphin:disk$2"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c27900) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffccb70640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 6 (Thread 0x7fffcd371640 (LWP 317545) "dolphin:disk$1"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c27880) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffcd371640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 5 (Thread 0x7fffcdb72640 (LWP 317544) "dolphin:disk$0"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5555556f9548, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5555556f94f8, cond=0x5555556f9520) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x5555556f9520, mutex=0x5555556f94f8) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x5555556f94f8, cond=0x5555556f9520) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c27940) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffcdb72640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 4 (Thread 0x7fffd8c4d640 (LWP 317543) "dolphin:cs0"):
#0  0x00007ffff45f7a8a in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x555555c26a70, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74
#1  0x00007ffff45f7aef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555555c26a70, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ../sysdeps/nptl/futex-internal.c:123
#2  0x00007ffff45f12c0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x555555c26a20, cond=0x555555c26a48) at pthread_cond_wait.c:504
#3  __pthread_cond_wait (cond=0x555555c26a48, mutex=0x555555c26a20) at pthread_cond_wait.c:619
#4  0x00007fffd8f2f0db in cnd_wait (mtx=0x555555c26a20, cond=0x555555c26a48) at ../include/c11/threads_posix.h:155
#5  util_queue_thread_func (input=input@entry=0x555555c25820) at ../src/util/u_queue.c:294
#6  0x00007fffd8f2eb9b in impl_thrd_routine (p=<optimized out>) at ../include/c11/threads_posix.h:87
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffd8c4d640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7fffe29f5640 (LWP 317542) "QDBusConnection"):
#0  0x00007ffff7dbf5bf in __GI___poll (fds=0x7fffd40154b0, nfds=4, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff3c9347c in g_main_context_poll (priority=<optimized out>, n_fds=4, fds=0x7fffd40154b0, timeout=<optimized out>, context=0x7fffd4000c20) at ../glib/gmain.c:4434
#2  g_main_context_iterate.constprop.0 (context=context@entry=0x7fffd4000c20, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3  0x00007ffff3c3cc03 in g_main_context_iteration (context=0x7fffd4000c20, may_block=1) at ../glib/gmain.c:4196
#4  0x00007ffff5d75698 in QEventDispatcherGlib::processEvents (this=0x7fffd4000b60, flags=<optimized out>) at kernel/qeventdispatcher_glib.cpp:423
#5  0x00007ffff5d22ab2 in QEventLoop::exec (this=this@entry=0x7fffe29f4b60, flags=<optimized out>, flags@entry={i = 0}) at ../../include/QtCore/../../src/corelib/global/qflags.h:69
#6  0x00007ffff5b6625a in QThread::exec (this=this@entry=0x7ffff609d060 <(anonymous namespace)::Q_QGS__q_manager::innerFunction()::holder>) at ../../include/QtCore/../../src/corelib/global/qflags.h:121
#7  0x00007ffff6022b6b in QDBusConnectionManager::run (this=0x7ffff609d060 <(anonymous namespace)::Q_QGS__q_manager::innerFunction()::holder>) at qdbusconnection.cpp:179
#8  0x00007ffff5b67456 in QThreadPrivate::start (arg=0x7ffff609d060 <(anonymous namespace)::Q_QGS__q_manager::innerFunction()::holder>) at thread/qthread_unix.cpp:329
#9  0x00007ffff45eb299 in start_thread (arg=0x7fffe29f5640) at pthread_create.c:481
#10 0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7fffe3619640 (LWP 317541) "QXcbEventQueue"):
#0  0x00007ffff7dbf5bf in __GI___poll (fds=fds@entry=0x7fffe3618a88, nfds=nfds@entry=1, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff4615f42 in poll (__timeout=-1, __nfds=1, __fds=0x7fffe3618a88) at /usr/include/bits/poll2.h:47
#2  _xcb_conn_wait (c=0x55555558f530, vector=0x0, count=0x0, cond=<optimized out>) at /usr/src/debug/libxcb-1.13.1-7.fc34.x86_64/src/xcb_conn.c:479
#3  0x00007ffff46178fc in _xcb_conn_wait (count=0x0, vector=0x0, cond=0x55555558f570, c=0x55555558f530) at /usr/src/debug/libxcb-1.13.1-7.fc34.x86_64/src/xcb_conn.c:445
#4  xcb_wait_for_event (c=0x55555558f530) at /usr/src/debug/libxcb-1.13.1-7.fc34.x86_64/src/xcb_in.c:697
#5  0x00007fffe37210f7 in QXcbEventQueue::run (this=0x5555555a4020) at qxcbeventqueue.cpp:228
#6  0x00007ffff5b67456 in QThreadPrivate::start (arg=0x5555555a4020) at thread/qthread_unix.cpp:329
#7  0x00007ffff45eb299 in start_thread (arg=0x7fffe3619640) at pthread_create.c:481
#8  0x00007ffff7dca353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7ffff0d5c980 (LWP 317537) "dolphin"):
#0  0x00007ffff7dbf5bf in __GI___poll (fds=0x555555fc9220, nfds=11, timeout=14523) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff3c9347c in g_main_context_poll (priority=<optimized out>, n_fds=11, fds=0x555555fc9220, timeout=<optimized out>, context=0x7fffdc005000) at ../glib/gmain.c:4434
#2  g_main_context_iterate.constprop.0 (context=context@entry=0x7fffdc005000, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3  0x00007ffff3c3cc03 in g_main_context_iteration (context=0x7fffdc005000, may_block=1) at ../glib/gmain.c:4196
#4  0x00007ffff5d75698 in QEventDispatcherGlib::processEvents (this=0x55555565e1a0, flags=<optimized out>) at kernel/qeventdispatcher_glib.cpp:423
#5  0x00007ffff5d22ab2 in QEventLoop::exec (this=this@entry=0x7fffffffd680, flags=<optimized out>, flags@entry={i = 0}) at ../../include/QtCore/../../src/corelib/global/qflags.h:69
#6  0x00007ffff5d2afe4 in QCoreApplication::exec () at ../../include/QtCore/../../src/corelib/global/qflags.h:121
#7  0x00007ffff6233c60 in QGuiApplication::exec () at kernel/qguiapplication.cpp:1860
#8  0x00007ffff68c1399 in QApplication::exec () at kernel/qapplication.cpp:2824
#9  0x00007ffff7ef119e in kdemain (argc=<optimized out>, argv=<optimized out>) at /usr/src/debug/dolphin-21.04.2-1.fc34.x86_64/src/main.cpp:222
#10 0x00007ffff7cf1b75 in __libc_start_main (main=0x555555555070 <main(int, char**)>, argc=1, argv=0x7fffffffd958, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffd948) at ../csu/libc-start.c:332
#11 0x00005555555550ae in _start ()
Comment 96 Harald Sitter 2021-07-06 11:39:41 UTC
I'm pretty sure you've not caught it being busy right there. The GUI thread is entirely idle.

It's curious though, the walker thread is running...

What happens if you remove the Size column from the view and restart dolphin? Does that make things more responsive?

If so, does tweaking the folder size display setting in dolphin's View Modes settings make any difference in performance? Also, what have you configured there for the size display?

Assuming the walker thread being active isn't a fluke, my theory would be that either your network or the server can't cope with the load the size walking causes on top of the regular activity for listing items (which would likely involve some bogus blocking I/O calls on the GUI thread, since this is a local path, as mentioned in comment #94).
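
For reference, the walker thread in the backtrace from comment #95 is in kdirectorycontentscounterworker.cpp, recursing with readdir() to count sub-items for the size/count display. Roughly simplified (this is an illustrative reconstruction, not the actual Dolphin source), that walk looks like the sketch below; on a FUSE mount every opendir()/readdir() pass is a network round trip, repeated for each visible folder.

// Simplified illustration of a recursive "count directory contents" walk;
// not the actual Dolphin code from the backtrace.
#include <dirent.h>
#include <cstring>
#include <string>

static int countEntries(const std::string &dirPath, int recurseLevels)
{
    DIR *dir = opendir(dirPath.c_str());
    if (!dir) {
        return 0;
    }
    int count = 0;
    while (dirent *entry = readdir(dir)) {
        if (std::strcmp(entry->d_name, ".") == 0 ||
            std::strcmp(entry->d_name, "..") == 0) {
            continue;
        }
        ++count;
        // Descending one level multiplies the remote calls by the number of
        // subdirectories: each recursion is another opendir()/readdir() pass.
        // (Real code would also need a stat() fallback when d_type is DT_UNKNOWN.)
        if (recurseLevels > 0 && entry->d_type == DT_DIR) {
            count += countEntries(dirPath + "/" + entry->d_name,
                                  recurseLevels - 1);
        }
    }
    closedir(dir);
    return count;
}

Even limited to one level of recursion, a folder full of subfolders turns a single listing into many sequential remote calls, which would match the server-side I/O you are describing.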
Comment 97 Germano Massullo 2021-07-13 11:58:44 UTC
Created attachment 140022 [details]
dolphin gdb_1
Comment 98 Germano Massullo 2021-07-13 11:58:59 UTC
Created attachment 140023 [details]
dolphin gdb_2
Comment 99 Germano Massullo 2021-07-13 11:59:11 UTC
Created attachment 140024 [details]
dolphin gdb_3
Comment 100 Harald Sitter 2021-07-13 12:40:03 UTC
Please note the questions I've asked.
Comment 101 Germano Massullo 2022-09-27 12:29:50 UTC
I have very interesting news (see the bottom part of this message).

(In reply to Harald Sitter from comment #96)
> I'm pretty sure you've not caught it being busy right there. The GUI thread
> is entirely idle.

I think it's the client system triggering too much I/O on the server, because it tries to retrieve as much data as possible from the remote folders. This is not happening when using Krusader.

> What happens if you remove the Size column from the view and restart dolphin? Does that make things more responsive?

I am still able to reproduce a situation where Dolphin is stuck and the server is loaded with a lot of I/O, so the answer is "no, disabling the Size column does not make things more responsive".

> If so, does tweaking the folder size display setting in dolphin's View Modes settings make any difference in performance? Also, what have you configured there for the size display?

The size is already set to the smallest one, and the view is "detailed view".

BUT!!

If I instead set Dolphin to use the "Icons" view, the problem no longer happens! The folders that used to get Dolphin stuck no longer cause trouble, folder browsing is very smooth, and I also don't see any massive I/O on the storage server.

To make sure I could reliably reproduce (or not reproduce) the problem, and to clear all caches, I ran the following commands before each test:

on the client:
$ fusermount -u mount_point
on the storage server:
# zpool export pool_name
# zpool import -d /dev/disk/by-id pool_name
on the client:
$ sshfs username@ip_address:/pool_name/dataset mount_point
Comment 102 SoftExpert 2023-10-18 06:29:44 UTC
I would also reference bug #454722 - Dolphin becomes frozen if NFS shares declared in fstab are not available - previously the mount point just showed as empty. Currently reproduced with Dolphin 23.08.2.
Comment 103 Pedro V 2024-02-02 16:25:29 UTC
(In reply to Germano Massullo from comment #101)
> I think it's the client system triggering too much I/O on the server because
> it tries to retrieve as much data as possible from the remote folders. This
> is not happening when using Krusader

Krusader is still not immune to such issues, as others mentioned earlier; it just retrieves significantly less information than Dolphin does with its default configuration, which even descends into subdirectories, so the traffic can be really excessive.

(In reply to Harald Sitter from comment #70)
> Alas, can't reproduce.

The key point, which was mentioned here already, is high latency; that's a significant problem elsewhere in KDE too, mostly because:
- A whole lot of I/O operations are done one by one, and with high latency that becomes really obvious. A simple example of the problem is watching the SFTP KIO slave deal with a directory full of symbolic links over a high-latency connection: an strace on sshd shows the stat()/statx() calls being issued rather slowly, each paying the latency penalty (see the sketch after this list).
- Apparently there is no progressive file listing, and gathering the information does block the GUI.
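
As a rough illustration of the first point (a hand-written sketch, not KIO code; the mount path is made up), listing a directory by issuing one synchronous lstat() per entry costs roughly one round trip per file, so 1000 entries over a 50 ms link is already about 50 seconds spent just waiting:

// Illustration of sequential per-entry metadata lookups; not KIO source code.
#include <dirent.h>
#include <sys/stat.h>
#include <cstdio>
#include <string>

int main()
{
    const std::string dirPath = "/mnt/sshfs/some-dir";  // hypothetical mount point
    DIR *dir = opendir(dirPath.c_str());
    if (!dir) {
        return 1;
    }
    while (dirent *entry = readdir(dir)) {
        const std::string full = dirPath + "/" + entry->d_name;
        struct stat st;
        // One lstat() per entry, issued back to back: on a high-latency mount
        // each call pays a full round trip and the total time grows linearly.
        if (lstat(full.c_str(), &st) == 0) {
            std::printf("%s\t%lld bytes\n", entry->d_name,
                        static_cast<long long>(st.st_size));
        }
    }
    closedir(dir);
    return 0;
}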

Theoretically this doesn't even really need networking to reproduce; it's just easier with it, since the network adds more latency and I suspect that no helpful I/O scheduler can get in the way of producing that latency.

Currently I can experience this with high latency caused not by the network but by accessing an HDD over NFS which is under heavy load, and not just from the test host (which definitely makes it worse); but hammering it with just one host already makes the experience bad.
Do note that caching definitely gets in the way of reproducing the issue, so I'll address that.
Given the previously mentioned conditions, I am looking at a directory with 30k+ files in which new files are slowly being created. I didn't measure the first listing attempt, but that's likely not the best case anyway, so let's assume a hot cache, which gives the following experiences:
- `ls -la`: <1 s, reasonably fast
- Krusader: <2 s, still pretty decent, although the files at the top do not change, and starting to scroll makes the UI unresponsive. One large scroll with the mouse and it's just gone for some time, although still only for seconds.
- Dolphin: ? s. At some point it starts showing the files, but due to the occasional creation of new files it never becomes usable, although it does show changes occasionally.

There should be quite a few ways of reproducing this even with, let's say, a local HDD being scrubbed or, worse, defragmented while testing.
The tricky part is that without file changes, various caching strategies and even the I/O scheduler are likely to get in the way, but with file changes other bugs may be at play too:
- At least with Krusader, the tracking of directory contents tends to fall apart, mostly after heavy I/O, until a reboot. This most commonly affects NFS mounts for me, but it has also happened multiple times after handling directories with a ton of files. What I tend to notice is that not all deleted files disappear from the list. I'm not sure how related it is to this issue, but I mention it as it may matter.
- Quite rare, but I recently happened to have gam_server pegging a core, with Krusader staying unresponsive until gam_server was killed. I'm not really familiar with Gamin; I'm not even sure whether it's actually needed or whether I'd be better off removing it, as it's apparently optional, but reading around it seems to be a troublemaker for others too, which could mess with testing.
Comment 104 unblended_icing552 2024-03-03 05:05:24 UTC
Navigating a CIFS-mounted directory (kernel mode; smb4k) containing a large number of files is extremely slow (for over 500 files it can hang for minutes). However, navigating the same folder through a KIO address (smb://<server>/<path>) is much faster (about 5 seconds).