Version: unknown (using KDE 3.1.2)
Installed from: SuSE
Compiler: gcc version 3.3 20030226 (prerelease) (SuSE Linux)
OS: Linux (i686) release 2.4.21

When downloading a PDF file (for which I have configured Konqueror to spawn acroread), two downloads happen. Here is what I see in my access.log:

192.168.1.10 - - [05/Jul/2003:10:29:09 +0200] "GET /~aknaff/tmp/01.pdf HTTP/1.1" 200 392825
192.168.1.10 - - [05/Jul/2003:10:29:13 +0200] "GET /~aknaff/tmp/01.pdf HTTP/1.1" 200 180224

What seems to happen is that Konqueror itself starts a download, reads just enough data to see the MIME type, and then pops up the "save or launch application" dialog. If you click "Save", all is OK: Konqueror downloads the rest and saves it. If you click "Open file", Konqueror just closes its handle, launches kfmexec to spawn the app, and kfmexec starts a *new* download. IMHO this is wasteful, as at least the beginning of the file (here 180224 bytes) has to be downloaded twice. (The log shown above is confusing: the first line is kfmexec's download, which actually starts second, whereas the second line is Konqueror's partial download, which starts first but stops last, because Konqueror holds on to the file descriptor in case the user clicks Save rather than Open.) For comparison, Mozilla does the same thing with just one download.

Moreover, the new download performed by kfmexec is done with mod_gzip switched off, leading to a larger download than would be optimal:

192.168.1.10 - - [05/Jul/2003:10:43:52 +0200] "hitchhiker.hitchhiker.org.lu GET /~aknaff/tmp/02.pdf HTTP/1.1" 200 151015 mod_gzip: OK In:392825 Out:151015:62pct.
192.168.1.10 - - [05/Jul/2003:10:43:57 +0200] "hitchhiker.hitchhiker.org.lu GET /~aknaff/tmp/02.pdf HTTP/1.1" 200 392825 mod_gzip: DECLINED:NO_ACCEPT_ENCODING In:0 Out:0:0pct.
(This listing has the correct ordering, as mod_gzip logs at the start of the transfer rather than at the end.)

The desired behaviour would be for the browser (Konqueror) to perform the download to a temp directory, perform the Content-Encoding processing (unzip the file), and then call the external app on that temp file. The current behaviour might make sense for streaming applications, which would handle their own download; however, even in that case it would be suboptimal, because kfmexec (which sits between Konqueror and the app) would still download the entire file before calling the app.

--------

With an embedded application (kghostview rather than acroread), there are still two downloads, but now the mod_gzip content encoding is at least accepted:

192.168.1.10 - - [05/Jul/2003:10:48:07 +0200] "hitchhiker.hitchhiker.org.lu GET /~aknaff/tmp/04.pdf HTTP/1.1" 200 151015 mod_gzip: OK In:392825 Out:151015:62pct.
192.168.1.10 - - [05/Jul/2003:10:48:08 +0200] "hitchhiker.hitchhiker.org.lu GET /~aknaff/tmp/04.pdf HTTP/1.1" 200 151015 mod_gzip: OK In:392825 Out:151015:62pct.

Weird.
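The desired download-once-then-open flow described above can be sketched roughly as follows (a minimal illustration in plain Python with a hypothetical `fetch_and_open` helper, not KIO code): fetch the resource exactly once into a temp file, and only then hand the fully downloaded local file to the external viewer.

```python
import shutil
import subprocess
import tempfile
import urllib.request

def fetch_and_open(url, viewer="xdg-open"):
    # One single GET for the whole resource; the body is read exactly once
    # into a temporary file. (Asking for "identity" sidesteps any
    # Content-Encoding decoding concerns in this simple sketch.)
    req = urllib.request.Request(url, headers={"Accept-Encoding": "identity"})
    with urllib.request.urlopen(req) as resp, \
         tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
        shutil.copyfileobj(resp, tmp)
        path = tmp.name
    # Only now spawn the helper application on the local temp file,
    # so no second network transfer is ever needed.
    subprocess.Popen([viewer, path])
    return path
```

The point of the sketch is the ordering: the network transfer completes before any external program is involved, so clicking "Open" costs no more bandwidth than clicking "Save".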
*** Bug 59102 has been marked as a duplicate of this bug. ***
*** Bug 57889 has been marked as a duplicate of this bug. ***
Issue still present in 3.1.95
I agree that this behaviour is wasteful. It may also lead to data loss or unexpected behaviour: suppose the server deletes or modifies the file after serving it the first time; then the original (what you actually requested) is lost. For now, the only way to avoid this seems to be to save the file first and then open it manually. Just thought I'd chime in. Thanks a lot.
What about using HEAD instead of GET? Or is that avoided because some servers out there don't support HEAD? Given that those would make up <1% of web servers, you could fall back to GET for them... (HEAD downloads just the HTTP headers, which will give you the Content-Type.)

Jason
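The HEAD-with-GET-fallback idea from the comment above could look something like this (an illustrative sketch only; the function name `probe_content_type` is hypothetical, and Konqueror's KIO layer works differently):

```python
import urllib.error
import urllib.request

def probe_content_type(url):
    # Try a cheap HEAD request first: only headers cross the wire,
    # which is enough to learn the Content-Type.
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            return resp.headers.get("Content-Type")
    except urllib.error.HTTPError:
        # A minority of servers reject HEAD (e.g. with 405);
        # fall back to GET and abandon the body after the headers.
        with urllib.request.urlopen(url) as resp:
            return resp.headers.get("Content-Type")
```

One caveat with this approach: it still costs an extra round trip compared to sniffing the type from the first bytes of a single GET, which is presumably why browsers prefer the peek-then-continue strategy.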
Problem still happens in Konqueror 3.3.92
Problem still happens in Konqueror 3.4.91
*** Bug 137856 has been marked as a duplicate of this bug. ***
I can confirm this bug for 3.5.9 and trunk r803905.
This is the infamously elusive "put-slave-on-hold" bug that was recently fixed. Basically, the IO scheduler was supposed to reuse the first connection, which was put on hold, for the second (kfmexec) request, but it did not because of a logic error. That is fixed for the most part in KDE 4.7. There are still a couple of cases, especially with ftp links, where this still won't work, i.e. the requests will be duplicated, because the full fix caused other regressions. Anyhow, this should be fixed for the 4.7.x series. Please note that even with this fix, duplicate downloads can still happen if a non-KDE application proclaims it can handle a particular remote resource; that is because non-KDE applications do not use KIO, KDE's IO layer. I am going to mark this as fixed for KDE 4.7.0. Feel free to reopen if the problem persists when that version is released.
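At the HTTP level, the connection reuse that this fix restores is ordinary persistent-connection (keep-alive) behaviour: the first request's connection is parked after peeking at the type, and the second request goes out over the same TCP connection instead of opening a new one. A minimal self-contained illustration in Python, using a throwaway local server (this only demonstrates the idea; it is not KIO code):

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive

    def do_GET(self):
        body = b"%PDF-1.4 dummy"
        self.send_response(200)
        self.send_header("Content-Type", "application/pdf")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/doc.pdf")   # first request: enough to learn the type
first = conn.getresponse()
mime = first.headers.get("Content-Type")
first.read()                      # drain the body so the socket is reusable
conn.request("GET", "/doc.pdf")   # second request reuses the same connection
data = conn.getresponse().read()
server.shutdown()
```

In the fixed scheduler, the analogue of the second `conn.request` is the kfmexec-side request being handed the held connection rather than creating a fresh one.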