Bug 368976

Summary: Make Check Data function multi-threaded
Product: [Applications] ktorrent
Reporter: tukkek
Component: general
Assignee: Joris Guisson <joris.guisson>
Status: RESOLVED INTENTIONAL
Severity: wishlist
CC: andrius, cfeck
Priority: NOR
Version: 4.3.1
Target Milestone: ---
Platform: Debian testing
OS: Linux
Latest Commit:
Version Fixed In:

Description tukkek 2016-09-17 20:44:59 UTC
I can't say for sure, but it seems KTorrent only uses one CPU while performing data checks, whether on a single torrent or on several at once. I've just resized my ktorrent-dedicated file-system partition and wanted to make sure my download and seeding data hadn't been affected. I'm sure it would also make routine data checking (like when a corrupt chunk is detected) faster during normal use.

Of course, it's not strictly necessary, but it would be a nice addition :) Thank you for the great work. I've been using KTorrent for over a decade now and don't think any other client is quite on par with this amazing download manager!

Reproducible: Didn't try
Comment 1 Christoph Feck 2016-09-19 13:51:47 UTC
I am sure the disk read speed is the limit. On rotating disks, the read speed could even drop if multiple threads try to read from the same file at different positions concurrently.
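
For illustration only (this is not KTorrent's code, and whether it would help depends on the point above): a common way to parallelize hashing without issuing parallel disk reads is a single-reader pipeline, where one thread reads pieces sequentially and a pool of workers hashes them. A minimal C++ sketch, with std::hash standing in for the real SHA-1 piece hash:

// Hypothetical sketch, not KTorrent code: one reader thread keeps disk access
// sequential while several workers hash chunks in parallel. Real torrent
// checking hashes each piece with SHA-1; std::hash stands in here for brevity.
#include <condition_variable>
#include <cstdio>
#include <fstream>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Chunk {
    std::size_t index;
    std::string data;
};

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    const std::size_t chunkSize = 256 * 1024;  // a typical piece size
    std::queue<Chunk> chunks;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Workers: take chunks off the queue and hash them on spare cores.
    auto worker = [&] {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !chunks.empty() || done; });
            if (chunks.empty())
                return;  // reader finished and queue drained
            Chunk c = std::move(chunks.front());
            chunks.pop();
            lock.unlock();
            std::size_t digest = std::hash<std::string>{}(c.data);
            std::printf("chunk %zu -> %zx\n", c.index, digest);
        }
    };

    unsigned n = std::thread::hardware_concurrency();
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < (n ? n : 2); ++i)
        workers.emplace_back(worker);

    // Single reader: the file is read strictly front to back, so a rotating
    // disk never seeks between competing readers.
    std::ifstream in(argv[1], std::ios::binary);
    std::string buf(chunkSize, '\0');
    std::size_t index = 0;
    while (in) {
        in.read(&buf[0], static_cast<std::streamsize>(chunkSize));
        std::streamsize got = in.gcount();
        if (got <= 0)
            break;
        {
            std::lock_guard<std::mutex> lock(m);
            chunks.push(Chunk{index++, buf.substr(0, static_cast<std::size_t>(got))});
        }
        cv.notify_one();
    }

    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_all();
    for (auto& t : workers)
        t.join();
    return 0;
}

Even with such a pipeline, on a rotating disk the single reader thread alone would likely saturate the drive, which is exactly the point made above.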
Comment 2 tukkek 2016-09-19 16:53:25 UTC
Could very well be! As I said, I didn't even check the source to see whether the procedure is multi-threaded or not.

I guess most systems aren't using expensive solid-state drives, which would make file-system I/O less of a bottleneck, and it's even less likely that people use RAID or multiple hard-drive mounts for their torrents - which means slow file reading would indeed be the norm...

Seems I didn't think this through before opening the bug. I guess it was sort of optimistic thinking on my part that this could be improved, since the operation took a while to finish on my computer!
Comment 3 Andrius Štikonas 2017-12-10 14:11:51 UTC
Should we close this then? There isn't any manpower to work on this wishlist anyway. Fixing current bugs is probably more important.
Comment 4 tukkek 2017-12-13 18:39:23 UTC
Yes, it seems this wouldn't be the right way to approach the problem, as Christoph points out. As the original author, I'm trying to close this, although I'm not sure of the proper way to do it. Please fix any workflow mistakes I might make.