| Summary: | Extremely long load time on www.abclinuxu.cz | | |
|---|---|---|---|
| Product: | [Applications] konqueror | Reporter: | Michal Vyskocil <michal.vyskocil> |
| Component: | general | Assignee: | Konqueror Bugs <konqueror-bugs-null> |
| Status: | RESOLVED FIXED | | |
| Severity: | normal | CC: | david, jreznik, petr, tomas.linhart |
| Priority: | NOR | | |
| Version First Reported In: | unspecified | | |
| Target Milestone: | --- | | |
| Platform: | Compiled Sources | | |
| OS: | Linux | | |
| Latest Commit: | | Version Fixed/Implemented In: | |
| Sentry Crash Report: | | | |
| Attachments: | The text export from Wireshark; Konqueror debug log - waiting | | |
Description
Michal Vyskocil
2009-01-14 09:51:04 UTC
Created attachment 30233 [details]
The text export from Wireshark

I used Wireshark to capture the HTTP communication between Konqueror and the abclinuxu.cz site - http://www.abclinuxu.cz/blog/techblog/2009/1/entropa. The filter value was 'http and ip.addr==195.70.150.7'. You can see there is a big delay between every server response and Konqueror's next GET.

Can confirm that. Very interesting.

I can confirm it too in 4.1.96. Refresh works - the page is loaded instantly!

Created attachment 30275 [details]
Konqueror debug log - waiting

I also have a log from after a refresh instead of the waiting one, so if it helps I can upload that too.
Potential fix below. The problem seems to occur because we get the 304 response plus its headers in a single read. We then parse the response, unget the headers, and when we next try to get the headers we do a blocking read instead of consuming what was pushed back. This patch makes it return the unget buffer first:
Index: http.cpp
===================================================================
--- http.cpp (revision 919318)
+++ http.cpp (working copy)
@@ -1861,6 +1861,8 @@
m_unreadBuf.clear();
}
+// Note: the implementation of unread/readBuffered assumes that unread will only
+// be used when there is extra data we don't want to handle, and not to wait for more data.
void HTTPProtocol::unread(char *buf, size_t size)
{
// implement LIFO (stack) semantics
@@ -1886,6 +1888,10 @@
buf[i] = m_unreadBuf.constData()[bufSize - i - 1];
}
m_unreadBuf.truncate(bufSize - bytesRead);
+
+ // if we have an unread buffer, return here, since we may already have enough data to
+ // complete the response, so we don't want to wait for more.
+ return bytesRead;
}
if (bytesRead < size) {
int rawRead = TCPSlaveBase::read(buf + bytesRead, size - bytesRead);
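To make the interaction clearer, here is a minimal, self-contained sketch of the unread/readBuffered pattern the patch relies on. It is not the kdelibs code: the BufferedReader class and the blockingSourceRead() stand-in are made up for illustration, with a canned string taking the place of TCPSlaveBase::read(). The point it demonstrates is that a buffered read must return pushed-back data immediately instead of falling through to a read that can block.

```cpp
// Minimal sketch (assumed names, not the kdelibs API) of LIFO unread +
// buffered read, with the patch's early return in place.
#include <cstddef>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

class BufferedReader {
public:
    // Push bytes back so the next read sees them again, in LIFO order,
    // analogous to HTTPProtocol::unread() in the patch.
    void unread(const char *buf, size_t size) {
        for (size_t i = size; i > 0; --i)
            m_unreadBuf.push_back(buf[i - 1]);
    }

    // Read up to 'size' bytes. If pushed-back data exists, return it right
    // away; only fall back to the (potentially blocking) source read when the
    // unread buffer is empty. Without the early return we would block waiting
    // for data the server is never going to send -- the bug described above.
    size_t readBuffered(char *buf, size_t size) {
        if (!m_unreadBuf.empty()) {
            const size_t bufSize = m_unreadBuf.size();
            const size_t bytesRead = bufSize < size ? bufSize : size;
            for (size_t i = 0; i < bytesRead; ++i)
                buf[i] = m_unreadBuf[bufSize - i - 1];
            m_unreadBuf.resize(bufSize - bytesRead);
            return bytesRead;  // the fix: don't fall through to a blocking read
        }
        return blockingSourceRead(buf, size);
    }

private:
    // Stand-in for TCPSlaveBase::read(); here it just drains a canned reply.
    size_t blockingSourceRead(char *buf, size_t size) {
        const size_t n = m_source.size() < size ? m_source.size() : size;
        std::memcpy(buf, m_source.data(), n);
        m_source.erase(0, n);
        return n;
    }

    std::vector<char> m_unreadBuf;  // LIFO push-back buffer
    std::string m_source = "HTTP/1.1 304 Not Modified\r\n\r\n";
};

int main() {
    BufferedReader r;
    char buf[64];
    size_t n = r.readBuffered(buf, sizeof(buf));  // reads the whole 304 reply
    r.unread(buf, n);                             // push the headers back
    n = r.readBuffered(buf, sizeof(buf));         // returned immediately, no blocking
    std::cout << std::string(buf, n);
}
```

In this sketch the second readBuffered() call returns the pushed-back 304 headers without touching the now-empty source, which is the behaviour the early return in the patch restores for replies small enough to arrive in one read.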
It works fine now, thanks man! (Btw, it's the `kioslave/http/http.cpp' file in kdelibs.)

@Maksim: thanks for the fix, it's OK now. Is it possible to commit it to SVN?

SVN commit 926999 by orlovich:
Make sure we don't do any extra blocking reads if we still have buffered data to process; fixes freezes on websites with extremely small reply headers, like abclinuxu.cz
BUG:180631
M +6 -0 http.cpp
WebSVN link: http://websvn.kde.org/?view=rev&revision=926999