Version: (using KDE 4.3.0)
Installed from: SuSE RPMs

As per the subject. My 2 cents:
- don't count the number of remote files (or dirs) prior to the actual transfer
- instead, do the counting concurrently with the transfer
What do you mean exactly? If you browse /usr/bin remotely using Dolphin or Konqueror, this will of course take a lot of time.
Waiting for more details ...
Right, we need an example. Consider the following dir. structure at a remote host:

data
|
+-- data-00/
|
+-- data-01/

where data-00 and data-01 consist of hundreds (or thousands) of small files (<= 16KB).

With "sftp -r user@remotehost:/data localdir", all files are transferred immediately (judging from the progress bar).

With Dolphin:
1. Browse to "sftp://user@remotehost:/data". The good thing is that Dolphin handles this very fast.
2. Copy "/data" to a local dir. Then watch Dolphin spend time recursively counting the total number of files to be transferred. Only after the counting is finished does the actual transfer commence.

I guess the intention is to let Dolphin estimate whether there is sufficient free space in the local dir. Obviously this is a very nice feature to have (if my guess is true), though the current implementation can be optimized.

Hopefully I've articulated the issue; otherwise, I'd be glad to provide as much more info as I can.
This is not a kio_sftp issue. It is the way the copying of files is designed. Reassigning to kioslave default.
I can confirm the same performance issue using the ftp:// kio with Dolphin.

Uploading ~3k files (only ~2MB total) by a simple drag & drop to the ftp server takes about *** 3-4 HOURS ***. Doing the same job with FileZilla takes ~5-10 minutes. And I mean it: the systray shows a transfer speed of 0.5-5 KB/s, while the connection between localhost and the given server is at least 500 kb/s.

The biggest problem is that only one parallel transfer is made with the ftp:/ kioslave. While this would make sense for uploading movies, a single connection for tiny files is just insane.

One more thing: the speed decreases drastically after a small portion of the files has been copied. The first ~100-200 files (~5% of all files) get copied at ~100-300 kb/s, and then all traffic gets jammed.
Well, do you want speed or do you want data consistency? KDE uploads the file to a temporary file first and then overwrites the original if the upload was successful. This slows things down, but it ensures that you don't lose data. kio_sftp is still a blocking API. So zip the files :)
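The temp-file-then-overwrite pattern described above can be sketched like this for a local file (a minimal illustration of the idea only; KIO's own .part-file handling differs in detail, and `safe_overwrite` is a name I made up):

```python
import os
import tempfile


def safe_overwrite(path, data):
    """Write to a temporary file in the same directory, then rename it
    over the destination. A failed or interrupted write never leaves a
    half-written file at `path`; the old contents survive intact."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".part")
    try:
        with os.fdopen(fd, "wb") as fh:
            fh.write(data)
        os.replace(tmp, path)  # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp)  # clean up the partial temp file on failure
        raise
```

The cost is visible in this thread: every upload pays for an extra remote rename (and the server-side copy semantics that come with it), which hurts most when transferring thousands of tiny files.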
Consistency is great, but how does this provide more consistency? Doing uploads in parallel should not harm the integrity of the data.

I just had to upload ~1k small files, 9 MB total. After 0.5 HOURS only 20% of the files were copied, with an estimated 1.2 **DAYS** to finish. FileZilla did the entire transfer in *LESS than 4 minutes*. I know comparing these projects could seem rude, but FileZilla is an old FOSS project that almost never leaves users with broken data.

>> The kio_sftp is still a blocking API.
Could many parallel connections be used? Each one taking care of its own business (a single file transfer)?

>> So zip the files :)
Unfortunately, Apache does not like zipped files :/
Can anyone do the following and let me know if the performance improves?
1.) Quit all instances of Konqueror you are running.
2.) Press CTRL+ESC and check there are no instances of Konqueror running. Kill them if there are.
3.) Find and open the sftp.protocol file on your system for editing (probably as root user).
4.) Change the "maxInstances" property to 20 and the "maxInstancesPerHost" to 5.
5.) Save the changes and quit.
Now retry the same download process.
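For step 4, the edit looks roughly like this (an excerpt only; the file's exact location varies by distro, and the surrounding entries in sftp.protocol are omitted here):

```ini
[Protocol]
maxInstances=20
maxInstancesPerHost=5
```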
See comment #8.

(In reply to comment #7)
> Consistency is great, but how does it provide more consistency?
>
> Doing uploads in parallel should not harm any integrity of the data.
>
> Just had to do an upload of ~1k small files, 9 MB total. After 0.5 HOUR there
> was only 20% files copied, estimating 1.2 **DAYS** to finish.

That should never be the case, even over a very slow connection. Does following the steps I outlined in comment #8 help you at all? Otherwise, something else has really gone wrong. Unfortunately, I personally cannot duplicate this problem, and I attempted to copy a collection of files with a much larger count and total size to an ftp server.
See comment #8 and #9