| Summary: | kgeography redirects to non-translated country page on wikipedia | | |
|---|---|---|---|
| Product: | [Applications] kgeography | Reporter: | alsadi <alsadi> |
| Component: | general | Assignee: | Albert Astals Cid <aacid> |
| Status: | CONFIRMED | Resolution: | --- |
| Severity: | normal | CC: | lauranger, sh.yaron |
| Priority: | NOR | | |
| Version First Reported In: | unspecified | | |
| Target Milestone: | --- | | |
| Platform: | Fedora RPMs | | |
| OS: | Unspecified | | |
| Latest Commit: | | Version Fixed/Implemented In: | |
| Sentry Crash Report: | | | |
Description
alsadi, 2008-10-11 00:47:00 UTC
**Albert Astals Cid:** I disagree; the "bug" is that the map is not translated. If it were, it would already show مصر instead of Egypt, and so it would go to the correct page :-) I'm sure the Arabic team welcomes new contributors :-)

**alsadi:** Hello. The page which is displayed comes from Wikipedia, so the name must be taken from there. For example, if city X can be called Y or Z, and KDE translators call it Y while Wikipedians call it Z, then KGeography should open the page named Z, because page Y does not exist on Wikipedia.

**Albert Astals Cid:** Yes, that's a possibility, but you cannot ask translators to translate each city twice; that's simply unviable. So each time you, as a user, find this situation, you can use Wikipedia's own facilities to create a redirect from Y to Z.

**alsadi:** What you are suggesting is duplicated effort. Just let KGeography read what the Wikipedians have already done when it displays Wikipedia pages, unless you want people here to do the hard work of mirroring Wikipedia. Just think: how many cities are there on Earth, times the number of locales?

**Albert Astals Cid:** And how exactly do you suggest we "read the Wikipedia"?

**alsadi:** That's trivial:
echo -e "GET /wiki/Egypt HTTP/1.0\nHost: en.wikipedia.org\n\n" | cat | nc en.wikipedia.org 80 | perl -l -wne 'if (m|^\s*\<li[^>]*class="interwiki-ar"\>\<a[^>]href="([^"]+)"|) {print ${1}}'
Or, in general, with `$PAGE` and `$LANGXY` set as shell variables (note the quoting, so the shell rather than perl expands `$LANGXY`):
echo -e "GET /wiki/$PAGE HTTP/1.0\nHost: en.wikipedia.org\n\n" | cat | nc en.wikipedia.org 80 | perl -l -wne 'if (m|^\s*\<li[^>]*class="interwiki-$LANGXY"\>\<a[^>]href="([^"]+)"|) {print ${1}}'
Of course I'm not suggesting this hack itself; I'm suggesting its C/C++ implementation (maybe with QtNetwork).
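As an illustration only, a minimal QtNetwork sketch of that scraping approach could look like the following. The function name `interwikiUrl` and the wiring are hypothetical, not KGeography code, and it shares the one-liner's fragility: it only matches as long as Wikipedia's sidebar markup carries a `class="interwiki-<lang>"` attribute.

```cpp
// Hypothetical sketch: fetch the English article and pull the interwiki
// link for one target language out of the sidebar HTML.
#include <QCoreApplication>
#include <QEventLoop>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QRegularExpression>
#include <QTextStream>
#include <QUrl>

// Returns the localized article URL from the interlanguage sidebar, or an
// empty string when no such link exists (the caller would then fall back
// to the English page, as discussed in this thread).
static QString interwikiUrl(const QString &page, const QString &lang)
{
    QNetworkAccessManager manager;
    const QUrl url(QStringLiteral("https://en.wikipedia.org/wiki/%1").arg(page));
    QNetworkReply *reply = manager.get(QNetworkRequest(url));

    // Block until the reply finishes; a real implementation would stay
    // asynchronous, as Laurent notes below.
    QEventLoop loop;
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();

    const QString html = QString::fromUtf8(reply->readAll());
    reply->deleteLater();

    // The same screen-scrape as the shell one-liner above.
    const QRegularExpression re(QStringLiteral(
        "<li[^>]*class=\"interwiki-%1\"[^>]*>\\s*<a[^>]*href=\"([^\"]+)\"").arg(lang));
    const QRegularExpressionMatch m = re.match(html);
    return m.hasMatch() ? m.captured(1) : QString();
}

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    QTextStream(stdout) << interwikiUrl(QStringLiteral("Egypt"),
                                        QStringLiteral("ar")) << '\n';
    return 0;
}
```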
**Albert Astals Cid:** Problems:
* You do two network accesses instead of one.
* You depend on Wikipedia not changing the internal structure of its pages.
* You can miss some pages because nobody added the interwiki link.

Benefits:
* For some corner cases you get a page you would not get otherwise.

I really don't see a net benefit.

**alsadi:** I said it's a hack:

> of course I'm not suggesting this hack

There are many other ways; see http://en.wikipedia.org/wiki/Wikipedia:Database_download#Why_not_just_retrieve_data_from_wikipedia.org_at_runtime.3F. One can use http://en.wikipedia.org/wiki/Special:Export/Egypt to get the raw, unformatted XML, then jump to `<!--Other languages-->` and catch `[[ar:مصر]][[an:Echipto]]` etc., or do an SQL query (http://www.mediawiki.org/wiki/Manual:Database_layout). Maybe there are better ways.

> * You do two network accesses instead of one

It's a small amount of text, and it's better than demanding so much human work (cities times languages worth of translations and fixes).

> * You depend on wikipedia not changing the internal structure of their pages

As I said, that was just a proof of concept; you can use the raw XML or SQL instead.

> * You can miss some pages because noone added the interwiki link

If the page is missing, just display the English wiki page; that's much better than doing the translation twice, once in Wikipedia and once in KGeography. And if a city has no interlanguage link in its English Wikipedia page, then most likely that city does not exist.

**Albert Astals Cid:** Laurent, what's your opinion on this?

**Laurent:** Hi. Hum, I fancy the idea, but...

(+) It would save a lot of (boring?) work for translation volunteers.
(+) I could test KGeography without having to learn how to handle the i18n thing /o\
(+) This technique would allow another request on this bug tracker to be fulfilled: one could choose, at the start of a game, the language she wants to learn the names in.
(-) Users who are not continually connected could not take advantage of the previous point, or we would have to manage a cache in each user's $HOME, I guess.
(-) If we reserve this double loading for the wiki link (no cache), we introduce a time delay which we would have to handle asynchronously.
(-) The Egypt page is about 80 KiB, just for a string of a dozen bytes. We'd need a cache, with some cache policies exposed in the configuration, to let the user choose how she wants to waste bandwidth ;-)

I see quite some lines of code to get this done. It might not be soon that I have it finished. Any quicker volunteer? Regards.

**Automated message:** Dear Bug Submitter, This bug has been stagnant for a long time. Could you help us out and re-test if the bug is valid in the latest version? I am setting the status to NEEDSINFO pending your response; please change the status back to REPORTED when you respond. Thank you for helping us make KDE software even better for everyone!

**Automated message:** Dear Bug Submitter, This is a reminder that this bug has been stagnant for a long time. Could you help us out and re-test if the bug is valid in the latest version? Thank you for helping us make KDE software even better for everyone!

**Automated message:** Thank you for reporting this issue in KDE software. As it has been a while since this issue was reported, can we please ask you to see if you can reproduce the issue with a recent software version? If you can reproduce the issue, please change the status to "REPORTED" when replying. Thank you!

**Comment:** Wikidata to the rescue.
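For what it's worth, the parsing step of the Special:Export route alsadi sketches above is simple. The following standalone illustration uses a shortened stand-in for the real export wikitext; fetching the actual export over HTTP would work like the QtNetwork sketch earlier.

```cpp
// Sketch of extracting an interlanguage link such as [[ar:مصر]] from the
// raw wikitext that Special:Export returns. The sample string below is a
// stand-in, not a real export.
#include <QRegularExpression>
#include <QString>
#include <QTextStream>

int main()
{
    // Shortened stand-in for the wikitext of Special:Export/Egypt.
    const QString wikitext =
        QStringLiteral("...article body...\n"
                       "<!--Other languages-->\n"
                       "[[ar:مصر]][[an:Echipto]][[bg:Египет]]");

    const QString lang = QStringLiteral("ar");
    // [[ar:Title]] -> capture "Title" for the requested language code.
    const QRegularExpression re(
        QStringLiteral("\\[\\[%1:([^\\]]+)\\]\\]").arg(lang));
    const QRegularExpressionMatch m = re.match(wikitext);

    QTextStream out(stdout);
    out << (m.hasMatch() ? m.captured(1) : QStringLiteral("<no link>")) << '\n';
    return 0;
}
```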
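The closing comment points at what eventually made this tractable: Wikidata stores per-language sitelinks for every article, so a single API call maps an enwiki title to its arwiki counterpart. A speculative sketch follows; `localizedTitle` is an illustrative name, while `action=wbgetentities` with `sites`, `titles`, `props=sitelinks`, and `sitefilter` are genuine Wikidata API parameters.

```cpp
// Sketch: resolve an English Wikipedia title to its title on another
// language's Wikipedia via the Wikidata sitelinks API.
#include <QCoreApplication>
#include <QEventLoop>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QTextStream>
#include <QUrl>
#include <QUrlQuery>

static QString localizedTitle(const QString &enTitle, const QString &lang)
{
    QUrl url(QStringLiteral("https://www.wikidata.org/w/api.php"));
    QUrlQuery query;
    query.addQueryItem(QStringLiteral("action"), QStringLiteral("wbgetentities"));
    query.addQueryItem(QStringLiteral("sites"), QStringLiteral("enwiki"));
    query.addQueryItem(QStringLiteral("titles"), enTitle);
    query.addQueryItem(QStringLiteral("props"), QStringLiteral("sitelinks"));
    query.addQueryItem(QStringLiteral("sitefilter"), lang + QStringLiteral("wiki"));
    query.addQueryItem(QStringLiteral("format"), QStringLiteral("json"));
    url.setQuery(query);

    QNetworkAccessManager manager;
    QNetworkReply *reply = manager.get(QNetworkRequest(url));
    QEventLoop loop; // blocking for brevity; real code would stay asynchronous
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();

    const QJsonObject entities =
        QJsonDocument::fromJson(reply->readAll()).object()
            .value(QStringLiteral("entities")).toObject();
    reply->deleteLater();

    // One entity comes back, keyed by its Q-id; take the first.
    for (const QJsonValue &entity : entities) {
        const QJsonObject link = entity.toObject()
            .value(QStringLiteral("sitelinks")).toObject()
            .value(lang + QStringLiteral("wiki")).toObject();
        return link.value(QStringLiteral("title")).toString();
    }
    return QString();
}

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    // Prints "مصر" for ("Egypt", "ar") at the time of writing.
    QTextStream(stdout) << localizedTitle(QStringLiteral("Egypt"),
                                          QStringLiteral("ar")) << '\n';
    return 0;
}
```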