| Summary: | support scan operations needed by multimedia apps in IOSlaves | | |
|---|---|---|---|
| Product: | [Unmaintained] kio | Reporter: | Anders E. Andersen <andersa> |
| Component: | general | Assignee: | David Faure <faure> |
| Status: | RESOLVED WORKSFORME | | |
| Severity: | wishlist | CC: | amitshah |
| Priority: | NOR | | |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Platform: | Debian testing | | |
| OS: | Linux | | |
| Latest Commit: | | Version Fixed In: | |
| Sentry Crash Report: | | | |
Description
Anders E. Andersen
2004-01-10 12:45:57 UTC
I was just looking into this some more. Now I am no expert in the KIO API, but it seems to me that the first thing that needs to be done is to expand the KIO namespace with a new global 'get()'-like function that supports starting the transfer from a specific position in the file, instead of from the start. There is no way to do this with any of the present functions as far as I can tell. This function then needs to be implemented in io-slaves where applicable. Just to mention a couple of protocols which support this in theory: HTTP/1.1 supports it with ranged GETs, and SMB supports scan operations since it is a network filesystem.

> starting the transfer from a specific position in the file, instead of from the start.
Can already be done with getJob->addMetaData("resume", KIO::number(offset));
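For illustration, a minimal sketch of what that call sequence might look like against the KDE 3 era KIO API. Only the KIO::get(), addMetaData() and KIO::number() calls come from the report; the helper function and its name are assumptions.

```cpp
// Minimal sketch (assumes the KDE 3 era KIO API; the helper name is made up):
// ask the slave to start sending data at 'offset' by setting the "resume"
// metadata before the transfer begins.
#include <kurl.h>
#include <kio/job.h>
#include <kio/global.h>

KIO::TransferJob *getFromOffset(const KURL &url, KIO::filesize_t offset)
{
    KIO::TransferJob *job = KIO::get(url, false /*reload*/, false /*no progress dialog*/);
    job->addMetaData("resume", KIO::number(offset));
    // The payload then arrives through the job's usual
    // data(KIO::Job *, const QByteArray &) signal.
    return job;
}
```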
> > starting the transfer from a specific position in the file, instead of
> > from the start.
>
> Can already be done with getJob->addMetaData("resume", KIO::number(offset));

How does one find out what metadata is supported by a particular io-slave?

Anders

> How does one find out what metadata is supported by a particular io-slave?
Reading the source :)
kdelibs/kio/DESIGN.metadata can help for "what's available as metadata in general".
Ok then. That is some help at least. The actual framework for jumping around inside a file, instead of reading it from the start, seems to be in place. Can you change the resume key value once data transmission has begun? I would guess that the way multimedia apps should make use of KDE's io-slaves would be to use the KIO::multi_get() call, assuming that changes to the resume key value will be ignored once data transmission has begun. You would then have to re-request the same file with a new resume value using MultiGetJob::get(), to get data from a different position in the file.

> Can you change the resume key value once data transmission has begun?

No. Metadata exchange happens before the transfer. After that you would have to kill the job and start a new one, if you want to "jump" to another position.

> I would guess that the way multimedia apps should make use of KDE's io-slaves
> would be to use the KIO::multi_get() call, assuming that changes to the resume key
> value will be ignored once data transmission has begun.
> You would then have to re-request the same file with a new resume value using
> MultiGetJob::get(), to get data from a different position in the file.

I don't think pipelining (multi_get) is much supported outside of kio_http. Well, it'll work for other protocols, but just the way a normal get would work. I'm not sure why you'd want multi_get anyway (this is mostly for HTML pages that want to download N things from http at the same time, but in your case you're interested in a single source, no?).

Yes. The only worry I have is the potential overhead of making an entirely new job for each scan operation you want to perform. Will it be fast enough for it not to feel laggy in multimedia apps?

I don't know what "scan" means here, in the context of "multimedia apps". But yes, a new job certainly sounds expensive, since it might start a new slave, or the existing slave might have to download things again (e.g. over FTP/SMB). For fast seeks a cache is obviously needed. So why not use KIO the usual way (download all) and do that cache in the application?

> I don't know what "scan" means here, in the context of "multimedia apps".

I tend to use scan, seek and position interchangeably. The kind of application I am thinking about specifically is media players like xine and mplayer, or derivatives like Kaffeine.

> But yes, a new job certainly sounds expensive since it might start a new
> slave, or the existing slave might have to download things again (e.g. over
> FTP/SMB).

That is why I thought multi_get would be a better option. You wouldn't have to start a new job to seek to a new position in the file. (Again, I presume... I really have no idea how or if it would work.)

> For fast seeks a cache is obviously needed. So why not use KIO the usual

Not a local cache, not necessarily. After all, you can mount Samba shares locally and play media files over the network transparently, as if they were on a local disk. A local network is plenty fast enough for seeks to happen without being unacceptably laggy. If you seek a lot in the file, for instance by moving the position slider around erratically, you can get into trouble, but that is possible with local files too, if you are behaving badly enough.

> way (download all) and do that cache in the application?

We already have this with apps like Kaffeine. The big issue is that it is just dreadfully slow to transfer an entire movie to a local cache before the player can start playing it. And you wouldn't be able to move the position slider in the application / seek in the file. As the above example with Samba shares shows, it shouldn't be necessary to use a local cache, so it seems to me that there is no reason why it shouldn't be possible to implement this within the framework of KDE's io-slaves. Did I close this bug prematurely? :)

Anders

On Monday 07 March 2005 12:35, andersa@ellenshoej.dk wrote:

> I tend to use scan, seek and position interchangeably.

OK, let's say seek then, because scan made me think of scanning images (Kooka) :-) But my question was more along the lines of: how often do you need to seek when reading a given file?

> That is why I thought multi_get would be a better option. You wouldn't have to
> start a new job to seek to a new position in the file.

As I said, multi_get is about parallelizing the downloading of multiple different files, so this looks completely unrelated to your purpose.

> > For fast seeks a cache is obviously needed. So why not use KIO the usual
>
> Not a local cache, not necessarily. After all, you can mount Samba shares
> locally and play media files over the network transparently, as if they were
> on a local disk.

Well, that's still local in KDE terms, i.e. a file:/// URL. My point was: get it to be a file:/// URL by whichever means (e.g. KIO), and then you have normal seek operations without needing KIO.

> We already have this with apps like Kaffeine. The big issue is that it is just
> dreadfully slow to transfer an entire movie to a local cache before the player
> can start playing it. And you wouldn't be able to move the position slider in
> the application / seek in the file. As the above example with Samba shares
> shows, it shouldn't be necessary to use a local cache, so it seems to me that
> there is no reason why it shouldn't be possible to implement this within the
> framework of KDE's io-slaves.

OK, if seeking is only done when the user clicks on the slider, then it might be ok to start a new job on the given offset. It's not as if the format itself required jumping around all the time.

> > That is why I thought multi_get would be a better option. You wouldn't
> > have to start a new job to seek to a new position in the file.
>
> As I said, multi_get is about parallelizing the downloading of multiple
> different files, so this looks completely unrelated to your purpose.

Let's forget about that then.

> > Not a local cache, not necessarily. After all, you can mount Samba shares
> > locally and play media files over the network transparently, as if they
> > were on a local disk.
>
> Well, that's still local in KDE terms, i.e. a file:/// URL.
> My point was: get it to be a file:/// URL by whichever means (e.g. KIO),
> and then you have normal seek operations without needing KIO.

We want to be able to play the file without having to mount anything. As I wrote, copying the file to a local cache results in unacceptably long launch times for almost any file. Even for MP3s as small as a couple of megabytes, you will soon get tired of your media player having to store the file locally before playing starts. This is perhaps THE issue of this entire bug.

> if seeking is only done when the user clicks on the slider, then it
> might be ok to start a new job on the given offset. It's not as if the
> format itself required jumping around all the time.

Well, there is only one way to find out, isn't there... :) In any case, there is no reason why you shouldn't be able to stream it.
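To make the conclusion of this exchange concrete, here is a rough sketch of how a player could act on it; this is purely an assumption layered on top of the "resume" metadata discussed above, and the RemoteStream class and its members are invented for illustration. A seek is served by killing the running transfer job and starting a new one at the requested offset.

```cpp
// Sketch only (assumed usage, not an agreed KIO design): seek by killing the
// current job and issuing a new get() with a fresh "resume" offset.
#include <qobject.h>
#include <kurl.h>
#include <kio/job.h>
#include <kio/global.h>

class RemoteStream : public QObject
{
    Q_OBJECT
public:
    RemoteStream(const KURL &url) : m_url(url), m_job(0) {}

    // Called e.g. when the user releases the position slider.
    void seekTo(KIO::filesize_t offset)
    {
        if (m_job)
            m_job->kill();                      // silently drop the old transfer
        m_job = KIO::get(m_url, false, false);
        m_job->addMetaData("resume", KIO::number(offset));
        connect(m_job, SIGNAL(data(KIO::Job *, const QByteArray &)),
                this, SLOT(slotData(KIO::Job *, const QByteArray &)));
    }

private slots:
    void slotData(KIO::Job *, const QByteArray &chunk)
    {
        // hand 'chunk' to the player's demuxer/decoder
        Q_UNUSED(chunk);
    }

private:
    KURL m_url;
    KIO::TransferJob *m_job;
};
```

Whether killing a slave and spawning a new job is quick enough for slider-driven seeking is exactly the overhead question raised above.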
For some files, like MP3s with ID3 tags, you need to seek to the end of the file before playing starts, in order to fetch the ID3 tag. Other media files have similar needs. But using 2-3 jobs to start streaming the file seems to me like an acceptable overhead. Whether the overhead is small enough to really allow random seeking after playing has started is another issue. I am not too sure about that, as a lot of seeks could potentially be made in a very short time. If the media player was careful about not flooding the system with seeks, I guess it would be acceptable. Some kind of simple step-by-step forward/backwards seeking should definitely be possible, though.
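As an illustration of the "2-3 jobs before playback" startup described here, a hedged sketch that reuses the hypothetical getFromOffset() helper from earlier; obtaining the file size first (e.g. via KIO::stat()) is assumed and not shown.

```cpp
// Sketch only (assumed usage, not from the report): the jobs an MP3 player
// might run when opening a remote file.
//   job 1 (not shown): KIO::stat() to learn the file size
//   job 2: read the 128-byte ID3v1 tag from the end of the file
//   job 3: stream the audio itself from offset 0
void startPlayback(const KURL &url, KIO::filesize_t fileSize)
{
    // job 2: the ID3v1 tag occupies the last 128 bytes
    KIO::TransferJob *tagJob = getFromOffset(url, fileSize - 128);
    // ...collect 128 bytes from tagJob's data() signal, parse the tag...

    // job 3: begin streaming from the start of the file
    KIO::TransferJob *audioJob = getFromOffset(url, 0);
    // ...feed audioJob's data() signal to the decoder...

    Q_UNUSED(tagJob);
    Q_UNUSED(audioJob);
}
```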