Version: 1.1.3 (using KDE 3.4.3, Kubuntu Package 4:3.4.3-0ubuntu1)
Compiler:
Target: i486-linux-gnu
OS: Linux (i686) release 2.6.12-9-686

Akregator is a really nice tool for tracking the contents of websites via RSS feeds. Many thanks for writing such a nice tool. I would especially enjoy using it to read news articles in the subway, but unfortunately there is no network connection there :-) Akregator lets you read the full article by clicking the "complete text" link (in German: "Vollständiger Text"). For offline reading it would be very nice to automatically fetch and cache the pages behind these links, including pictures and other required files.
I won't implement this in Akregator. Akregator is a feed reader, not an application for mirroring the web for offline reading. (It can be used for offline reading, and I use it that way myself, but that's not its main purpose.) If Akregator has powerful plugin interfaces one day (who knows), someone could add this as a plugin, but I don't want it in the core.
ok :-) I'm currently writing a small RSS cache/proxy in Perl, which solves that problem for me :-) Best regards Marc Schoechlin
*** Bug 132665 has been marked as a duplicate of this bug. ***
Hi, because I missed a feature for offline reading of RSS feeds and the pages they reference, I wrote a small and generic toolchain for offline reading of RSS feeds :-) If you are interested, see here: http://www.256bit.org/rssoffline.shtml This tool works great with Akregator. Its caching logic is currently not perfect, but it works well for about 95% of my usage. If such a feature is integrated into Akregator in the future, maybe this code gives a good idea of what users need.
Hi, is it possible to write Akregator plugins in Python, and where can I find examples and interface specifications for these plugins?
No, Akregator isn't really plugin-friendly right now. There are only two interfaces usable for plugins:
- Storage: for alternative storage backends (e.g. SQLite)
- ArticleInterceptor: provides access to newly fetched articles. It could in theory be used to implement filters and the like, but it is not of much use right now.
I would like to improve this situation, but I can't promise anything: the port to Akonadi and the cleanup of the internal browser will already take a lot of my Akregator time, and right now I am the only one working on it.
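For illustration, an ArticleInterceptor-style hook could look roughly like the following. This is only a Python sketch of the concept; the real interface is C++, and the names used here (`process_article`, `KeywordFilter`) are invented for this example.

```python
# Hypothetical sketch of an ArticleInterceptor-style filter hook.
# The class and method names below are made up for illustration;
# the real Akregator interface is C++ and may differ.

class ArticleInterceptor:
    """Base class: sees every newly fetched article."""

    def process_article(self, article):
        """Return True to keep the article, False to drop it."""
        return True


class KeywordFilter(ArticleInterceptor):
    """Example filter: drop articles whose title contains a blocked word."""

    def __init__(self, blocked_words):
        self.blocked = [w.lower() for w in blocked_words]

    def process_article(self, article):
        title = article.get("title", "").lower()
        return not any(w in title for w in self.blocked)
```

A feed reader would call `process_article()` on each incoming item and discard those for which a registered interceptor returns False.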
Konqueror has a nice feature where you can archive a web page into a .war file (a gzipped tar of the page and all page elements). If Akregator had access to the .war functionality, prefetching/offline support would be pretty easy. And yes, I'd like to have that feature, too.
I think using this functionality is a really good idea. Some things which would be nice to have for this enhancement:
- Possibility to enable/disable prefetching of articles per feed
- Automatic modification of the displayed feed to show alternative links to the cached articles
- Prefetching of all articles which are referenced by an RSS feed
- Multithreaded fetching (configurable number of collector threads)
- Cache cleanup manager (drop archived articles after a defined number of days)
The rss-offline-utils (http://www.256bit.org/rssoffline.shtml) will maybe give you a good impression of the requested functionality. (This tool is not perfect, because wget is not as good at collecting pages and their requisites as the Konqueror .war files are.)
*** Bug 148894 has been marked as a duplicate of this bug. ***
*** This bug has been confirmed by popular vote. ***
It's also useful to prefetch article dependencies to improve Akregator's responsiveness while the user is reading. In this case the prefetching only needs to happen while the user is actively reading articles. I would grab at least the next and previous articles that would be jumped to with the - and = keys, with a radius of something like 5 articles. This is a different use case from offline reading, but it applies to everyone, so I think it should be on by default if implemented in a way that is linked to user interaction. With either kind of prefetching, it is necessary to make sure a malicious feed can't cause Akregator to spuriously use up lots of bandwidth.
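The reading-radius idea can be sketched as a small helper that, given the index of the currently viewed article, returns the neighbouring articles to prefetch. This is a Python illustration; the function name and the radius default are made up here.

```python
def prefetch_window(current, total, radius=5):
    """Indices of articles to prefetch around the current one.

    `current` is the index of the article being read and `total` the
    number of articles in the list. The current article itself is
    excluded because it is already loaded.
    """
    lo = max(0, current - radius)
    hi = min(total, current + radius + 1)
    return [i for i in range(lo, hi) if i != current]
```

The fetcher would run only while the user is actively reading, and the per-article download size should be capped so that a malicious feed cannot trigger unbounded bandwidth use.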
Thank you for the bug report. As this report hasn't seen any changes in 5 years or more, we ask you to confirm whether the issue still persists. If this bug is no longer present or relevant, please change the status to RESOLVED.