Version: (using KDE Devel)
Installed from: Compiled sources
OS: Linux

I have read about KVocTrain but found no indication that it supports learning of spoken language, only written. It would be very useful to have support for sound and images.

Use cases for images:
* Show an image of a house and the user must write the word for house in the target language.
* Show the image of a molecule and the user must write its IUPAC name.

I want to use KVocTrain to practice spoken language, so I need sound support. The data file must therefore contain a link to the sound. That link consists of a filename (with path or URL) and a time offset.

I have data for a course and I want to use it with KVocTrain. There are a number of lessons. The data structure is as follows:

table Lesson {
    -- The name of the lesson.
    Name_source     : text, source language
    Name_target     : text, target language
    Name_trans      : text, target language, roman transliteration
    Name_source_snd : sound, source language
    Name_target_snd : sound, target language
    -- The actual content.
    Content         : table of phrases (see below)
}

table Phrase {
    Source          : text, source language
    Target          : text, target language
    Target_Trans    : text, target language, roman transliteration
    Source_snd      : sound, source language
    Target_snd_slow : sound, target language, spoken slowly
    Target_snd      : sound, target language
}

I want to use the data, for example, like this:
* Hear the sound in the source language, try to say it, then listen to the sound in the target language.
* Read one of the text versions, try to say it, then listen to the sound in the target language.

After trying to say the phrase and hearing it, I tell the program whether I was right or wrong, so it knows if it should ask me about that phrase again in the next round.

I wish it were possible to represent this data and use it as described in KVocTrain.
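The table definitions above could be sketched in code roughly as follows. This is a minimal illustration of the proposed data model, not KVocTrain's actual file format; all class and field names here are invented for the example, with a sound reference modeled as file name (or URL) plus time offset as the report describes.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SoundRef:
    """A link to a recording: a file name (or URL) plus a time offset."""
    source: str              # file path or URL, e.g. a ripped CD track
    offset_sec: float = 0.0  # where the phrase starts within the file


@dataclass
class Phrase:
    source_text: str                        # source language
    target_text: str                        # target language
    target_trans: str                       # roman transliteration
    source_snd: Optional[SoundRef] = None
    target_snd_slow: Optional[SoundRef] = None  # spoken slowly
    target_snd: Optional[SoundRef] = None


@dataclass
class Lesson:
    name_source: str
    name_target: str
    name_trans: str
    name_source_snd: Optional[SoundRef] = None
    name_target_snd: Optional[SoundRef] = None
    content: List[Phrase] = field(default_factory=list)


# One lesson with a single phrase whose target-language sound starts
# 12.5 seconds into a ripped track.
lesson = Lesson(
    name_source="Greetings",
    name_target="Salutations",
    name_trans="salutations",
    content=[
        Phrase("Hello", "Bonjour", "bonjour",
               target_snd=SoundRef("track01.ogg", offset_sec=12.5)),
    ],
)
```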
Where would the sounds come from?

> Use cases for images:
> * Show an image of a house and the user must write the word for house in the target language.
=> More a case for KLettres; please check it and send me your thoughts.
> * Show the image of a molecule and the user must write the IUPAC-name of it.
=> Kalzium.

Alternatively, both uses would fit in KEduca.
Will be implemented in KDE 4.1
> Where would the sounds come from?

The user can simply borrow a CD with a language course from a library, or buy one. He would just have to specify the position of each phrase on the medium (filename/track, time offset, and length; this metadata could perhaps be shared online between users of the language course).
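The shared metadata suggested above could be as simple as a per-course table mapping each phrase to its position on the ripped medium. A hypothetical sketch (the field names and JSON layout are invented for illustration, not an actual KVocTrain/Parley format):

```python
import json

# Each entry points at a track of the ripped CD plus the phrase's
# start time and length in seconds.
course_metadata = {
    "course": "Example Language Course, CD 1",
    "entries": [
        {"phrase_id": 1, "track": "track01.ogg", "offset": 0.0, "length": 2.4},
        {"phrase_id": 2, "track": "track01.ogg", "offset": 3.1, "length": 1.8},
        {"phrase_id": 3, "track": "track02.ogg", "offset": 0.0, "length": 2.9},
    ],
}

# Serialized like this, the file could be shared online between users who
# own the same CD, so only one person has to do the timing work.
serialized = json.dumps(course_metadata, indent=2)
restored = json.loads(serialized)
```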
> > * Show the image of a molecule and the user must write the IUPAC-name of it.
> => Kalzium

The point was not that the image is a molecule; that was just an example, like the house. The point is simply to see something in an image and then write its name. It could be molecules, animals, kitchen tools, or anything else.
There's a very simple implementation showing images and playing sounds in SVN. Please note that Anne-Marie's reply is almost a year old.

Assigning sound files to entries will be supported. It will definitely not be possible to use CDs the way you described. If you want to use this kind of material, you will have to rip the CD and cut the sound files into pieces manually; that is simply out of the scope of Parley.

I would rather suggest using wiktionary.org as a sound source. I would like to see Parley users contribute pronunciations in their mother tongue; this helps Wiktionary, and Parley will hopefully benefit from it as well.
> If you want to use this kind of material you will have to rip the cd and cut the soundfiles into pieces manually.

Ripping it is OK, but cutting it manually is not. The user should just have to rip the CDs and then download the metadata that someone has already created, or create it himself. (Specifying a position inside a sound file is just as natural and useful as, for example, specifying a position inside an HTML document with "file.html#position".)
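The "file.html#position" analogy could be applied to sound directly: a single string like "track01.ogg#t=12.5" names both the file and the offset. The "#t=" syntax below is made up for this example (it happens to resemble what later became W3C Media Fragments), not something Parley actually parses:

```python
def parse_sound_ref(ref):
    """Split a 'file#t=offset' reference into (file, offset_in_seconds).

    A reference without a fragment means the sound starts at offset 0.
    """
    if "#t=" in ref:
        path, _, offset = ref.partition("#t=")
        return path, float(offset)
    return ref, 0.0


assert parse_sound_ref("track01.ogg#t=12.5") == ("track01.ogg", 12.5)
assert parse_sound_ref("track01.ogg") == ("track01.ogg", 0.0)
```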
Sound and image are in Parley, have fun.