| Summary: | KGet fails to download directory-style HTTP URLs, which break when index.html is appended | | |
|---|---|---|---|
| Product: | [Applications] kget | Reporter: | Stephan Sokolow <kde_bugzilla_2> |
| Component: | general | Assignee: | KGet bugs <kget-bugs-null> |
| Status: | RESOLVED FIXED | | |
| Severity: | normal | | |
| Priority: | NOR | | |
| Version First Reported In: | 0.8.5 | | |
| Target Milestone: | --- | | |
| Platform: | Compiled Sources | | |
| OS: | Linux | | |
| Latest Commit: | | Version Fixed/Implemented In: | |
| Sentry Crash Report: | | | |

Description

Stephan Sokolow 2007-09-21 09:55:57 UTC

> I'm not sure if I really understood right. You would like to download websites? Or files that have no filenames?

Neither is truly correct, but "the files have no filenames" is reasonably close. Basically, if I feed http://www.foo.com/bar/ into KGet, it tries to download http://www.foo.com/bar/index.html; if my index file is actually named index.php, index.shtml, main.asp, or whatever.foo, the download fails. The same problem occurs when the URL isn't a real file path at all (for example, Ruby on Rails routes, Pylons routes, many mod_rewrite rules, and PATH_INFO tricks like http://www.foo.com/bar.php/queryID/).

*** Bug has been marked as fixed ***

Fixed in KDE4-trunk