Created attachment 129500 [details]
Demonstration of Dolphin becoming unresponsive while a FUSE mount is loading

SUMMARY
Opening [rclone](http://rclone.org/) or other FUSE-based mounts such as pCloud Drive makes Dolphin's UI unresponsive and completely unusable until the mount has finished loading. Other file managers such as Nautilus do not freeze when opening the exact same mounts.

STEPS TO REPRODUCE
1. Add a remote to rclone such as OneDrive ([see rclone's documentation](https://rclone.org/onedrive/))
2. Mount the remote
3. Open a folder on the remote that has many files in Dolphin
4. Wait for the remote folder to load and try to continue using Dolphin
5. Open the same folder in Nautilus
6. Try to use Nautilus while the folder loads

OBSERVED RESULT
Dolphin becomes unresponsive while the remote loads and no loading indicator is shown. Opening the same folder in Nautilus does not render it unresponsive, and it correctly shows a loading indicator.

EXPECTED RESULT
Dolphin should remain responsive and show some sort of loading indicator while loading FUSE mounts, just as it does for other remotes such as FTP.

SOFTWARE/OS VERSIONS
Dolphin Version: 19.12.3
Operating System: Kubuntu 20.04
KDE Plasma Version: 5.18.5
KDE Frameworks Version: 5.68.0
Qt Version: 5.12.8
Kernel Version: 5.4.0-37-generic

ADDITIONAL INFORMATION
Originally reported on Reddit: https://www.reddit.com/r/kde/comments/haxpfg/dolphin_freezes_when_opening_rclone_mounts/
Dolphin believes FUSE mount points are local, because they don't use a remote protocol. I am not sure if it is possible to ask the kernel if a local file path is actually remote, and if KIO needs to be changed or the change needs to be made in Dolphin.
(In reply to Christoph Feck from comment #1)
> Dolphin believes FUSE mount points are local, because they don't use a
> remote protocol. I am not sure if it is possible to ask the kernel if a
> local file path is actually remote, and if KIO needs to be changed or the
> change needs to be made in Dolphin.

Would it be worth looking into how Nautilus (or gvfs) handles it? Maybe it just assumes all mount points are remote and always adds a progress indicator? That might make sense, since some old local drives can be quite slow to read.

This issue will probably become more important once kio-fuse is rolled out as, from what I understand, it will use FUSE to mount Google Drive and other remote folders.
Of course gvfs knows if a mount point is local or not, because gvfs created it. The question is, how an application using this mount point can learn if it is a local or remote file system. BTW, kio-fuse does the reverse: It offers KIO mount points for non-KIO applications as local paths.
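For reference, GIO does expose this to applications as a filesystem attribute, which is presumably how Nautilus learns it. A sketch using the `gio` CLI (the `/mnt/share_NAS` path is an example, and note the caveat: querying the attribute still touches the mount, so on a dead mount this probe can itself block):

```shell
# Sketch: ask GIO whether the filesystem backing a path is considered remote.
# GIO exposes a boolean "filesystem::remote" attribute
# (G_FILE_ATTRIBUTE_FILESYSTEM_REMOTE); /mnt/share_NAS is an example path.
probe_remote() {
  command -v gio >/dev/null 2>&1 || { echo "gio not installed"; return 0; }
  # -f / --filesystem prints filesystem attributes, filesystem::remote among them
  gio info -f "$1" 2>/dev/null | grep -i 'filesystem::remote' \
    || echo "attribute not reported for $1"
}

probe_remote /mnt/share_NAS
```

For plain kernel CIFS/NFS mounts (no gvfs involved), my understanding is that GIO derives this attribute from the filesystem type, so it should report TRUE there as well.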
(In reply to Christoph Feck from comment #3)
> Of course gvfs knows if a mount point is local or not, because gvfs created
> it. The question is, how an application using this mount point can learn if
> it is a local or remote file system.
>
> BTW, kio-fuse does the reverse: It offers KIO mount points for non-KIO
> applications as local paths.

Oh. I clearly still don't understand anything about how all the components in Linux work. I'll leave this to the professionals :)
(In reply to Christoph Feck from comment #3)
> Of course gvfs knows if a mount point is local or not, because gvfs created
> it. The question is, how an application using this mount point can learn if
> it is a local or remote file system.
>
> BTW, kio-fuse does the reverse: It offers KIO mount points for non-KIO
> applications as local paths.

KIO has a function called isSlow(), but calling it can itself block.
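On the question of how an application can learn this without blocking: one heuristic (a sketch of the idea, not KIO's actual logic, and the list of "slow" filesystem types below is my assumption) is to look the path up in /proc/self/mountinfo. That file is answered directly by the kernel and does not block on an unresponsive mount, unlike stat()/statfs() on the path itself:

```shell
# Sketch: classify the filesystem type under a path WITHOUT touching the
# mount itself. /proc/self/mountinfo never blocks on a dead mount, whereas
# stat()/statfs() on the path can hang for the full kernel timeout.
mount_fstype() {
  awk -v p="$1" '
    {
      # the filesystem type is the field right after the "-" separator
      for (i = 7; i <= NF; i++) if ($i == "-") { t = $(i + 1); break }
      # field 5 is the mount point; pick the longest one that prefixes p
      mp = $5; if (mp != "/") mp = mp "/"
      if (index(p "/", mp) == 1 && length($5) >= best) {
        best = length($5); type = t
      }
    }
    END { print type }
  ' /proc/self/mountinfo
}

is_maybe_slow() {
  # filesystem types I am guessing a file manager would treat as "remote"
  case "$(mount_fstype "$1")" in
    nfs*|cifs|smb*|fuse*|sshfs) return 0 ;;
    *) return 1 ;;
  esac
}

mount_fstype /proc   # on Linux this prints: proc
```

Even with a whitelist like this, the probe only tells you the mount *might* be slow; the actual I/O still has to happen off the UI thread, since even a "fast" type can be backed by a spun-down disk.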
I can reproduce this bug with rclone. Using Dolphin is basically impossible when working with large remote file systems over FUSE.
I just have a folder with 500 files in it, and just navigating folders is a pain. I'm not even trying to download the actual files themselves, just browsing.
I have been running into this issue when using onedriver to access OneDrive. I had seen that a workaround was to use Nautilus, and tried that today. There the behavior was as expected: my OneDrive files load quickly and Nautilus remains responsive in the default view. However, when I tried to switch from the grid view to the list view in Nautilus, it too started to become unresponsive.

So I opened Dolphin and switched to the grid view there (I usually use the details view mode), and the OneDrive folder works without Dolphin becoming unresponsive. It did slow down a little as various file previews were loading, but never became fully unresponsive.

I hope this bit of information is useful in tracking down what the actual issue might be (something with pulling metadata needed for the details view?) and also maybe provides a workaround for other affected users (just switch to grid view).
I use Gigolo to automatically mount a Samba share from my other laptop (via GVFS). When I inserted an external USB drive, KDE Plasma (5.27.11) froze: taskbar, widgets, desktop, and Dolphin too. Since I could still switch between open windows (though not to Dolphin), I switched to Gigolo and disconnected the Samba share, and KDE Plasma and Dolphin immediately started working again. I'm on Kubuntu 24.04.1 LTS with Dolphin 23.08.5. SSHFS, also mounted through Gigolo, doesn't cause trouble.
This affects *any* slow filesystem when accessed through conventional mounts (i.e. not explicitly via KIO): sleeping/spun-down mechanical disks; any kind of broken/disconnected/unresponsive mount; NFS/CIFS/FUSE/SSHFS etc. mounted by any mechanism besides KIO (fstab, autofs, gvfs, etc.).

Any of these will cause Dolphin to become completely unresponsive (and often other parts of Plasma as well), regardless of whether the affected path is currently in view or in any way critical to system functionality (note *all* mounted disks spinning up when opening Dolphin on an unrelated path). Better yet, this will cause all open Dolphin instances to freeze, and any attempt to open a new instance will hang until all mounts are available and loaded, rendering a core component of the DE completely unusable.

This has been going on (with minor variations) for at least a decade now, and the root cause is deeper than Dolphin. See #448361, #474403, #454722, #492815, #441077, and many more I CBF tracking down. This has been regularly reported (and promptly buried and forgotten) for years.
*** Bug 448361 has been marked as a duplicate of this bug. ***
*** Bug 454722 has been marked as a duplicate of this bug. ***
*** Bug 273045 has been marked as a duplicate of this bug. ***
*** Bug 442684 has been marked as a duplicate of this bug. ***
And the reports keep rolling in... 450696, 474403

This is a major usability problem, one unique to KDE, and a class of problem (UI threads and desktop components blocking on unrelated I/O) that was solved in other environments as far back as 1995. Is it ever going to get any attention whatsoever?
We are collecting related bug reports so that we can look into this issue. We hear your frustration. At this point, comments that don't add useful data don't move this bug report forward. Please respect the subscribers to this report and the developers, and keep comments data-related. Thanks!
The title of this bug report relates to FUSE mounts, but from what I can tell the same issue impacts *any* kind of mount (in my case, both NFS and local disks). Is there something about this that's fundamentally different with FUSE?
(In reply to miranda from comment #17)
> Is there something about this that's fundamentally different
> with FUSE?

No, AFAICT. rclone FUSE mounts are just an obvious case of "high-latency filesystem that's not explicitly accessed via a kio:// path". Local filesystems cause the UI to freeze as well if there's noticeable latency involved (e.g. a disk spin-up or a stalled I/O queue).

Frankly, I have no idea why this particular bug report is getting all the attention; there are dozens of other, more general reports on the same underlying problem to choose from.
It's not only Dolphin: the complete KDE desktop freezes too (I can't even move the mouse). When an NFS-shared drive is slow to write (especially when it gets close to full), Dolphin will freeze, and lately it has also regularly been freezing the complete desktop (so it got worse; in the past only Dolphin was freezing). As the network drive clears its write queue, it unfreezes, then later freezes again as the drive's queue fills up, and so on. This is with the latest updated versions of everything, on Arch Linux. The NFS shares are mounted via fstab.
(In reply to David Pearson from comment #8)
> I have been running into this issue when using onedriver to access
> OneDrive. I had seen that a workaround was to use Nautilus, and tried that
> today. There the behavior was as expected: my OneDrive files load quickly
> and Nautilus remains responsive in the default view.

Are you referring to https://github.com/jstaf/onedriver/issues/366#issue-1997500394? If so, I'll close that issue and direct its subscribers here instead – no point putting the onus on the onedriver developers.

Additionally, does https://discuss.kde.org/t/dolphin-takes-a-long-time-to-reinitialise-when-it-last-had-a-fish-tab-open-that-points-to-an-unavailable-host/34436?u=rokejulianlockhart count as an example of this? I don't know whether fish counts as a FUSE mount (and I'm hesitant to mention this at https://bugs.kde.org/show_bug.cgi?id=433611, despite their similarities, because it's not in fstab).
*** Bug 469533 has been marked as a duplicate of this bug. ***
*** Bug 492815 has been marked as a duplicate of this bug. ***
I am having this issue as well with a Synology NAS which is not available all the time (it automatically shuts down at midnight and is only woken up via WOL on demand). The NAS shares are in fstab, as user mounts. But if I do not umount all shares of this NAS before it shuts down, every File/Open dialog, and Dolphin, will freeze for minutes, on every file system access. "sudo umount -a -t cifs -l" on the command line fixes this, but is a dirty workaround.

We need to uncouple all KIO accesses to mounts which *may* be unresponsive (i.e. user mounts?) from the thread that controls the UI. It must be possible to move away from an unresponsive folder inside Dolphin or a file dialog window *without* having to wait for a timeout - and even so, the timeouts should not be minutes long.

Can I help debug this somehow, provide strace logs or test? I am using KDE Neon 24.04 with all KDE updates applied, so should be fairly recent.
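Until that uncoupling happens, a bounded probe is one way for a script to check such a mount without hanging the shell. A minimal sketch (the 2-second budget is an arbitrary example, and `stat` here merely stands in for whatever access would otherwise block for tens of seconds or minutes):

```shell
# Sketch: check whether a mount answers within a time budget instead of
# blocking indefinitely; the 2-second budget is an arbitrary example.
probe_mount() {
  if timeout 2 stat -t "$1" >/dev/null 2>&1; then
    echo "$1: responsive"
  else
    echo "$1: unavailable or slow"
  fi
}

probe_mount /tmp   # prints: /tmp: responsive
```

This is also, in essence, what the UI would need to do internally: issue the stat on a worker thread with a deadline rather than on the thread that controls the UI.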
(In reply to Jens from comment #23) If what you're seeing is indeed an example of https://discuss.kde.org/t/34436?u=rokejulianlockhart, and if you continue to reproducibly wait minutes, try launching Dolphin, ascertaining what its PID is (you can enable the relevant column in System Monitor's Processes page), attaching `strace -Tr -p $id`, and ASAP attempting to access that unresponsive external storage. During reproduction, check your terminal (this works best if you have the terminal and Dolphin vertically adjacent, both always visible irrespective of which window is focussed), and you should see not very much activity, or a lot of repetition. Wait until Dolphin eventually becomes responsive, then quickly kill Dolphin, and send over the syscall that shows via its timestamp that it took over a minute to elapse. By the way, if you're familiar with KDAB's Hotspot, `perf` might also record something useful. 🤷 I would do this myself, but reproduction is less consistent for me, when accessing a FISH URI for a shut-down VM.
(In reply to Roke Julian Lockhart Beedell from comment #24)
> [...] try launching Dolphin, ascertaining what its PID is [...], attaching
> `strace -Tr -p $id`, and ASAP attempting to access that unresponsive
> external storage.

I hope this is helpful: I have a CIFS mount that I made unavailable by killing the VPN connection it needs.
I copied everything that is over 0 or seemed related to me (total file size of the log is 21MB):

0.000048 newfstatat(AT_FDCWD, "/mnt/cifs_remote", 0x7ffd98a43fa0, 0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.445701>
20.445914 statfs("/mnt/cifs_remote", 0x7ffd98a43c80) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.479526>
20.479888 newfstatat(AT_FDCWD, "/mnt", {st_mode=S_IFDIR|0755, st_size=34, ...}, 0) = 0 <0.000038>
0.000171 statfs("/mnt", {f_type=BTRFS_SUPER_MAGIC, f_bsize=4096, f_blocks=61989965, f_bfree=15973207, f_bavail=15836102, f_files=0, f_ffree=0, f_fsid={val=[0x27bdcda2, 0xf3dd3043]}, f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_NOATIME}) = 0 <0.000037>
0.000145 inotify_add_watch(17, "/mnt", IN_MODIFY|IN_ATTRIB|IN_MOVED_FROM|IN_MOVED_TO|IN_CREATE|IN_DELETE|IN_DELETE_SELF|IN_MOVE_SELF|IN_DONT_FOLLOW) = 6 <0.000048>
0.000208 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000030>
0.000301 write(5, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000012>
0.000362 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY|O_CLOEXEC) = 25 <0.000035>
0.000089 fstat(25, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 <0.000017>
0.000063 read(25, "23 30 0:21 / /proc rw,nosuid,nod"..., 1024) = 1024 <0.000094>
0.000160 readlink("/proc", 0x7ffd98a43350, 1023) = -1 EINVAL (Das Argument ist ungültig) <0.000013>
0.000055 readlink("/proc/self", "4724", 1023) = 4 <0.000014>
0.000054 readlink("/proc/4724", 0x7ffd98a43350, 1023) = -1 EINVAL (Das Argument ist ungültig) <0.000012>
0.000049 readlink("/proc/4724/mountinfo", 0x7ffd98a43350, 1023) = -1 EINVAL (Das Argument ist ungültig) <0.000013>
0.000084 read(25, "osuid,nodev,noexec,relatime shar"..., 1024) = 1024 <0.000055>
0.000117 read(25, "nfigfs configfs rw\n45 26 0:106 /"..., 1024) = 1024 <0.000056>
0.000114 read(25, "rw,noatime shared:177 - btrfs /d"..., 1024) = 1024 <0.000047>
0.000100 read(25, "00\n693 48 0:158 / /home/mjb/Vide"..., 1024) = 569 <0.000038>
0.000086 read(25, "", 1024) = 0 <0.000011>
0.000048 close(25) = 0 <0.000014>
0.000048 getuid() = 1000 <0.000011>
0.000043 geteuid() = 1000 <0.000011>
0.000044 getgid() = 1000 <0.000010>
0.000042 getegid() = 1000 <0.000010>
0.000049 newfstatat(AT_FDCWD, "/run/mount/utab", 0x7ffd98a43a30, 0) = -1 ENOENT (Datei oder Verzeichnis nicht gefunden) <0.000019>
0.000269 newfstatat(AT_FDCWD, "/mnt/cifs_remote", 0x7ffd98a43b60, 0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.477056>
20.477257 statx(AT_FDCWD, "/home/mjb/.local/share/mime", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID|STATX_SUBVOL, stx_attributes=0, stx_mode=S_IFDIR|0700, stx_size=228, ...}) = 0 <0.000039>
...
0.000056 openat(AT_FDCWD, "/mnt/cifs_remote", O_RDONLY|O_CLOEXEC) = 25 <0.000018>
0.000057 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a436d0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.474056>
20.474231 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a43710) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.479600>
20.479738 read(25, 0x55819fbc52e0, 16384) = -1 EISDIR (Ist ein Verzeichnis) <0.000026>
0.000110 close(25) = 0 <0.000028>
...
0.000036 close(25) = 0 <0.000009>
0.000036 getuid() = 1000 <0.000007>
0.000034 geteuid() = 1000 <0.000009>
0.000036 getgid() = 1000 <0.000007>
0.000035 getegid() = 1000 <0.000008>
0.000034 prctl(PR_GET_DUMPABLE) = 1 (SUID_DUMP_USER) <0.000008>
0.000033 newfstatat(AT_FDCWD, "/run/mount/utab", 0x7ffd98a43910, 0) = -1 ENOENT (Datei oder Verzeichnis nicht gefunden) <0.000017>
0.000086 write(5, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000008>
0.000061 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY|O_CLOEXEC) = 25 <0.000011>
0.000038 fstat(25, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 <0.000007>
0.000034 read(25, "23 30 0:21 / /proc rw,nosuid,nod"..., 1024) = 1024 <0.000028>
0.000057 readlink("/proc", 0x7ffd98a42ff0, 1023) = -1 EINVAL (Das Argument ist ungültig) <0.000008>
0.000035 readlink("/proc/self", "4724", 1023) = 4 <0.000009>
0.000047 readlink("/proc/4724", 0x7ffd98a42ff0, 1023) = -1 EINVAL (Das Argument ist ungültig) <0.000008>
0.000035 readlink("/proc/4724/mountinfo", 0x7ffd98a42ff0, 1023) = -1 EINVAL (Das Argument ist ungültig) <0.000008>
0.000044 read(25, "osuid,nodev,noexec,relatime shar"..., 1024) = 1024 <0.000022>
0.000057 read(25, "nfigfs configfs rw\n45 26 0:106 /"..., 1024) = 1024 <0.000028>
0.000061 read(25, "rw,noatime shared:177 - btrfs /d"..., 1024) = 1024 <0.000020>
0.000054 read(25, "00\n693 48 0:158 / /home/mjb/Vide"..., 1024) = 569 <0.000014>
0.000044 read(25, "", 1024) = 0 <0.000007>
0.000032 close(25) = 0 <0.000008>
0.000033 getuid() = 1000 <0.000008>
0.000032 geteuid() = 1000 <0.000006>
0.000031 getgid() = 1000 <0.000008>
0.000033 getegid() = 1000 <0.000008>
0.000032 prctl(PR_GET_DUMPABLE) = 1 (SUID_DUMP_USER) <0.000007>
0.000032 newfstatat(AT_FDCWD, "/run/mount/utab", 0x7ffd98a436d0, 0) = -1 ENOENT (Datei oder Verzeichnis nicht gefunden) <0.000009>
0.000122 newfstatat(AT_FDCWD, "/mnt/cifs_remote", 0x7ffd98a43800, 0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.463392>
20.463638 statx(AT_FDCWD, "/home/mjb/.local/share/mime", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID|STATX_SUBVOL, stx_attributes=0, stx_mode=S_IFDIR|0700, stx_size=228, ...}) = 0 <0.000087>
...
0.000054 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a43370) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.473234>
20.473469 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a433b0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.479509>
20.479804 read(25, 0x55819fbc52e0, 16384) = -1 EISDIR (Ist ein Verzeichnis) <0.000059>
0.000203 close(25) = 0 <0.000045>
...
0.000054 newfstatat(AT_FDCWD, "/run/mount/utab", 0x7ffd98a43a30, 0) = -1 ENOENT (Datei oder Verzeichnis nicht gefunden) <0.000017>
0.000112 write(5, "\1\0\0\0\0\0\0\0", 8) = 8 <0.000012>
0.000113 newfstatat(AT_FDCWD, "/mnt/cifs_remote", 0x7ffd98a43b60, 0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.476714>
20.476934 statx(AT_FDCWD, "/home/mjb/.local/share/mime", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID|STATX_SUBVOL, stx_attributes=0, stx_mode=S_IFDIR|0700, stx_size=228, ...}) = 0 <0.000064>
...
0.000057 openat(AT_FDCWD, "/mnt/cifs_remote", O_RDONLY|O_CLOEXEC) = 25 <0.000016>
0.000057 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a436d0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.473839>
20.474072 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a43710) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.479725>
20.479953 read(25, 0x55819fbc52e0, 16384) = -1 EISDIR (Ist ein Verzeichnis) <0.000035>
...
0.000184 newfstatat(AT_FDCWD, "/mnt/cifs_remote", 0x7ffd98a44060, 0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.474079>
20.474318 statx(AT_FDCWD, "/home/mjb/.local/share/mime", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID|STATX_SUBVOL, stx_attributes=0, stx_mode=S_IFDIR|0700, stx_size=228, ...}) = 0 <0.000070>
...
0.000054 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a43bd0) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.474253>
20.474481 statx(25, "", AT_STATX_SYNC_AS_STAT|AT_NO_AUTOMOUNT|AT_EMPTY_PATH, STATX_ALL, 0x7ffd98a43c10) = -1 EHOSTDOWN (Der Rechner ist nicht aktiv) <20.479668>
20.479948 read(25, 0x55819fbc52e0, 16384) = -1 EISDIR (Ist ein Verzeichnis) <0.000033>
0.000135 close(25) = 0 <0.000036>
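For anyone sifting a 21 MB `strace -Tr` log like the one above, the offending records can be isolated by the bracketed durations that `-T` appends. A sketch, demonstrated on a tiny two-record sample at a hypothetical /tmp path:

```shell
# two sample records in the `strace -Tr` format quoted above
cat > /tmp/dolphin.strace <<'EOF'
0.000048 newfstatat(AT_FDCWD, "/mnt/cifs_remote", 0x7ffd98a43fa0, 0) = -1 EHOSTDOWN <20.445701>
0.000171 statfs("/mnt", {...}) = 0 <0.000037>
EOF

# keep only syscalls whose <duration> is one second or longer,
# i.e. durations whose integer part starts with a nonzero digit
grep -E '<[1-9][0-9]*\.[0-9]+>' /tmp/dolphin.strace
```

Only the ~20 s newfstatat record survives the filter; the sub-millisecond statfs is dropped.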
(In reply to janis.blechert from comment #25)

Wow! If I've read that correctly, you've got some calls that last 20 seconds. Could you run `perf record --call-graph dwarf $(command -v dolphin)`, then exit Dolphin ASAP after it's done (that is, if Dolphin was what you traced), then compress the `$HOME/perf.data` file (with Ark, 7Z's LZMA2 at maximum compression works well)? It might be too big to upload here (> 4 MiB), but if not, or if you can link it, that'd be brilliant too. Don't worry if it's too much hassle; that log is useful enough.
(In reply to Roke Julian Lockhart Beedell from comment #26)
> Could you run `perf record --call-graph dwarf $(command -v dolphin)`, then
> exit Dolphin ASAP after it's done [...]?

Yes, I did trace Dolphin. I'm glad to be of help: http://blechert.at/2025-05-31-dolphin-perf.data
Created attachment 181913 [details] Perf Data From Killed CIFS Mount (In reply to janis.blechert from comment #27) Uploaded the performance trace from a very demonstrable installation to prevent linkrot. Per https://discussion.fedoraproject.org/t/when-providing-a-performance-profile-should-the-user-provide-the-unprocessed-or-processed-data/154803, I'd provide a .perfparser file to save some time, but https://github.com/KDAB/hotspot/issues/728#issue-3104450855 is my enemy here, and the raw data is somehow small enough regardless.
Created attachment 181915 [details] Flamegraph of 2025-05-31-dolphin-perf.data I was able to load perf.data in hotspot to create this flamegraph. Hope it helps.
Created attachment 181917 [details] perfparser of 2025-05-31-dolphin-perf.data I could also save a .perfparser - Hope this is what you're looking for!
*** Bug 494935 has been marked as a duplicate of this bug. ***
*** Bug 505069 has been marked as a duplicate of this bug. ***
Copying in details from the duplicate report:

=================================================================================

STEPS TO REPRODUCE
1. Ensure an SMB share is mounted via fstab. My fstab line is:
//192.168.X.X/share_NAS /mnt/share_NAS cifs defaults,nofail,_netdev,credentials=/route/credentials 0 0

Scenario A: Dolphin is already open when the network connection to the SMB share is lost.
2. Open Dolphin (you can browse any location, local or remote).
3. While Dolphin is open, disconnect the network path to the SMB server (e.g., activate a VPN that makes the server unreachable, or shut down the SMB server).
4. Attempt to interact with Dolphin (e.g., click on any folder, local or remote; refresh the view; navigate back/forward; or simply click on Dolphin's window).

Scenario B: Dolphin is not open when the network connection to the SMB share is lost.
2. With Dolphin closed, disconnect the network path to the SMB server (e.g., activate a VPN that makes the server unreachable, or shut down the SMB server).
3. Attempt to launch Dolphin.

OBSERVED RESULT
Scenario A: Dolphin freezes completely for at least 1 minute, making the entire application unresponsive. Each further interaction (e.g., clicking on any folder, even a local one) freezes it again for at least 1 minute. The interface often turns gray and displays "(Not Responding)". This unresponsiveness affects the entire application, including local file browsing.

Scenario B: Dolphin takes approximately 2 minutes and 40 seconds to launch. After it finally opens, the behavior described in Scenario A (freezing when interacting with any folder, even local ones) is observed.

EXPECTED RESULT
Dolphin should handle the lost network connection gracefully.
It should either:
- Fail quickly with an error message (e.g., "Connection lost," "Share unreachable").
- Show a loading indicator without freezing the entire application, allowing the user to interact with other parts of Dolphin or the desktop.
- Prompt the user with options to retry or cancel.

SOFTWARE/OS VERSIONS
Linux/KDE Plasma: Arch Linux
KDE Plasma Version: 6.3.5
KDE Frameworks Version: 6.14.0
Qt Version: 6.9.0
Kernel Version: 6.14.7-arch2-1 (64-bit)
Graphics Platform: Wayland

ADDITIONAL INFORMATION
This issue occurs on a fully updated Arch Linux system (sudo pacman -Syu has been performed).

Troubleshooting steps already taken:
- The soft mount option is already set in the fstab line.
- net.ipv4.tcp_retries2 has been set to 3 via sysctl.
- "Show previews for remote files" is disabled in Dolphin settings.
- Other fstab options beyond the basic defaults,nofail,_netdev have been tested without resolving the issue.

The full fstab line used is:
//192.168.X.X/share_NAS /mnt/share_NAS cifs uid=1000,gid=1000,nosuid,nodev,nofail,_netdev,soft,x-systemd.automount,credentials=/route/credentials,x-gvfs-show,x-gvfs-name=share_NAS 0 0

Comparison with other file managers:
- Nautilus: Does NOT freeze. It opens and functions normally in local locations. Attempting to access the affected network drives does not block the application; it only shows a "loading..." message, allowing the user to keep interacting with it.
- Thunar and Konqueror: Also freeze like Dolphin in this scenario.

This appears to be a recurring issue for Dolphin/KIO when dealing with unavailable network shares. I've noted that older bug reports like bug 435940 (closed as WORKSFORME) and bug 316655 (RESOLVED UPSTREAM) are closed, but the problem persists on my current, updated system. I've also seen other active reports (e.g., bug 445065, bug 500671) describing similar freezing behavior with SMB in Dolphin.
Please consider this report as a current manifestation or potential regression of this long-standing problem. =================================================================================
I'm raising this to VHI based on the age of the bug, the number of people affected, and the effect on the system.