Bug 371579

Summary: Kaffeine 2.0.5 unresponsive on certain DVB-S channels
Product: [Applications] kaffeine Reporter: Stefan Huebner <stehueb>
Component: general Assignee: Mauro Carvalho Chehab <mchehab>
Status: RESOLVED FIXED    
Severity: normal CC: knossos456
Priority: NOR    
Version: 2.0.1   
Target Milestone: ---   
Platform: Mint (Ubuntu based)   
OS: Other   

Description Stefan Huebner 2016-10-24 15:11:01 UTC
When watching certain DVB-S channels on ASTRA 19.2, Kaffeine uses more than 100% CPU and no longer responds.

For example, the issue is present on these transponders / channels:
11582 H, DVB-S2: "NDR FS HD"
12110 H, DVB-S: "MDR FS", "NDR FS", "rbb Berlin", "SWR Fernsehen" 

In this case, the output from kaffeine is, for example:

24-10-16 10:46:34.487 [Info	] Using built-in dvb device manager
24-10-16 10:46:34.799 [Info	] Found dvb device P14f188800070c12a: NXP TDA10071
[00007fb980000c48] vdpau_avcodec generic error: decoder profile not supported: 2
[00007fb9ac00b158] vdpau_avcodec generic error: decoder profile not supported: 2
[00000000020caeb8] core input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 300 ms)
[00000000020caeb8] core input error: ES_OUT_RESET_PCR called
[00000000020caeb8] core input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 301 ms)
[00000000020caeb8] core input error: ES_OUT_RESET_PCR called
[00000000020caeb8] core input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 328 ms)
[00000000020caeb8] core input error: ES_OUT_RESET_PCR called
[00000000020caeb8] core input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 376 ms)
[00000000020caeb8] core input error: ES_OUT_RESET_PCR called
[00000000020caeb8] core input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 581 

The issue is never present on these transponders:
12188 H, DVB-S: "RTL Television"
12545 H, DVB-S: "ProSieben", "SAT.1"

It seems the issue is caused by the large amount of EPG data on these channels: when the loop that checks for existing EPG entries in dvbepg.cpp (DvbEpgModel::addEntry()) is commented out, the issue is not present.

// for (Iterator it = entries.begin(); ConstIterator(it) != entries.constEnd(); it++) {
//     const DvbSharedEpgEntry &existingEntry = *it;
//
//     // Don't do anything if the event already exists
//     if (*existingEntry == entry)
//         return DvbSharedEpgEntry();
//
//     if (existingEntry->channel != entry.channel)
//         continue;
//
//     const QDateTime enEnd = existingEntry->begin.addSecs(QTime(0, 0, 0).secsTo(existingEntry->duration));
//
//     // A new event conflicts with an existing one
//     if (((entry.begin > existingEntry->begin) && (entry.begin < enEnd)) || ((end > existingEntry->begin) && (end < enEnd))) {
//         Debug("removed", existingEntry);
//         it = removeEntry(it);
//         break;
//     }
//
//     // New event data for the same event
//     if ((entry.begin == existingEntry->begin) && (end == enEnd)) {
//         if (existingEntry->details.isEmpty() && !entry.details.isEmpty()) {
//             emit entryAboutToBeUpdated(existingEntry);
//             const_cast<DvbEpgEntry *>(existingEntry.constData())->details = entry.details;
//             emit entryUpdated(existingEntry);
//             Debug("updated", existingEntry);
//         }
//         return existingEntry;
//     }
// }


Reproducible: Always

System Info:
Linux Mint 18 KDE
CPU: 4 x Intel Core 2 Quad CPU Q8200 @ 2.33GHz
Memory: 7.8 GiB RAM
Graphics Card: NVIDIA Corporation G92 [GeForce GT 230 OEM]
Comment 1 knossos456 2016-11-19 16:14:13 UTC
Same problem here.
It seems to be an AGC problem: I have problems with the lowest C/N on v 2.05; it was OK with 1.22.
Comment 2 Mauro Carvalho Chehab 2016-12-27 18:31:42 UTC
(In reply to Stefan Huebner from comment #0)
> It seems the issue is caused by the large amount of EPG data on these
> channels: when the loop that checks for existing EPG entries in
> dvbepg.cpp (DvbEpgModel::addEntry()) is commented out, the issue is not present.

Not sure how to fix it. We could try to optimize the EPG code in Kaffeine,
for example by using a hash table to avoid the iterator loop. I'm not sure whether that would be enough, though.
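
As a rough illustration of the hash-table idea (a hypothetical sketch with simplified stand-in types, not the actual Kaffeine classes), the entries could be keyed by channel in a QMultiHash, so that a new entry is only compared against the entries of the same channel instead of the whole model:

// Hypothetical sketch of the hash-table idea (simplified stand-in types,
// not the actual Kaffeine classes).
#include <QMultiHash>
#include <QList>
#include <QString>
#include <QDateTime>

struct EpgEntry {
    QString channel;
    QDateTime begin;
    QDateTime end;
    QString details;
};

class EpgIndex
{
public:
    // Returns false if an entry with the same channel, begin and end already
    // exists; otherwise stores the new entry.
    bool addEntry(const EpgEntry &entry)
    {
        // Only the entries of this channel are scanned, not the whole model.
        const QList<EpgEntry> sameChannel = index.values(entry.channel);
        for (const EpgEntry &existing : sameChannel) {
            if (existing.begin == entry.begin && existing.end == entry.end)
                return false; // duplicate event, nothing to do
        }
        index.insert(entry.channel, entry);
        return true;
    }

private:
    QMultiHash<QString, EpgEntry> index; // channel -> entries of that channel
};

This is still linear within one channel, but it avoids walking every entry in the model for each new event.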

Another possibility would be to add an option to disable EPG parsing, but that would remove a useful feature from Kaffeine.

(In reply to knossos456 from comment #1)
> Same problem here.
> It seems to be an AGC problem: I have problems with the lowest C/N on v 2.05;
> it was OK with 1.22.

Yeah, a low C/N could be the root cause, as it may start creating duplicated entries.
Comment 3 Mauro Carvalho Chehab 2017-02-16 11:27:12 UTC
(In reply to Mauro Carvalho Chehab from comment #2)
> (In reply to Stefan Huebner from comment #0)
> > It seems the issue is caused by the large amount of EPG data on these
> > channels: when the loop that checks for existing EPG entries in
> > dvbepg.cpp (DvbEpgModel::addEntry()) is commented out, the issue is not present.
> 
> Not sure how to fix it. We could try to optimize the EPG code in Kaffeine,
> for example by using a hash table to avoid the iterator loop. I'm not sure
> whether that would be enough, though.
> 
> Another possibility would be to add an option to disable EPG parsing, but
> that would remove a useful feature from Kaffeine.
> 
> (In reply to knossos456 from comment #1)
> > Same problem here.
> > It seems to be an AGC problem: I have problems with the lowest C/N on v 2.05;
> > it was OK with 1.22.
> 
> Yeah, a low C/N could be the root cause, as it may start creating duplicated
> entries.

I was able to reproduce the bug here with a new DVB-S2 provider.

The issue seems to happen even with a high C/N signal. The problem is that the
algorithm that detects duplicated entries is O(n), so on providers that broadcast
a large number of entries it becomes very slow, making Kaffeine unresponsive,
especially when the program guide window is opened.

That was causing frame loss and an overload on one of the CPUs:

top - 09:25:05 up 8 days, 16:44,  4 users,  load average: 0,62, 0,25, 0,21
Tasks: 343 total,   3 running, 340 sleeping,   0 stopped,   0 zombie
%Cpu0  :  8,3 us,  0,7 sy,  0,0 ni, 90,0 id,  0,0 wa,  0,0 hi,  1,0 si,  0,0 st
%Cpu1  :  6,6 us,  1,7 sy,  0,0 ni, 91,0 id,  0,0 wa,  0,0 hi,  0,7 si,  0,0 st
%Cpu2  : 99,0 us,  0,3 sy,  0,0 ni,  0,0 id,  0,0 wa,  0,7 hi,  0,0 si,  0,0 st
%Cpu3  :  5,3 us,  1,0 sy,  0,0 ni, 93,0 id,  0,0 wa,  0,3 hi,  0,3 si,  0,0 st
%Cpu4  :  7,0 us,  1,3 sy,  0,0 ni, 91,0 id,  0,0 wa,  0,3 hi,  0,3 si,  0,0 st
%Cpu5  :  7,0 us,  0,7 sy,  0,0 ni, 92,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu6  :  1,3 us,  0,0 sy,  0,0 ni, 96,0 id,  0,0 wa,  2,7 hi,  0,0 si,  0,0 st
%Cpu7  :  6,3 us,  1,3 sy,  0,0 ni, 92,0 id,  0,0 wa,  0,3 hi,  0,0 si,  0,0 st

I changed the logic to use an O(log n) algorithm and simplified the logic that
detects duplicated entries. With that, the CPU workload on my machine dropped
considerably. With the program guide window opened, I'm now getting:

top - 09:13:42 up 8 days, 16:32,  4 users,  load average: 0,93, 0,54, 0,29
Tasks: 342 total,   1 running, 341 sleeping,   0 stopped,   0 zombie
%Cpu0  : 11,6 us,  0,7 sy,  0,0 ni, 87,1 id,  0,0 wa,  0,0 hi,  0,7 si,  0,0 st
%Cpu1  :  7,3 us,  1,0 sy,  0,0 ni, 91,0 id,  0,0 wa,  0,0 hi,  0,7 si,  0,0 st
%Cpu2  :  7,6 us,  1,0 sy,  0,0 ni, 90,7 id,  0,0 wa,  0,7 hi,  0,0 si,  0,0 st
%Cpu3  :  6,6 us,  1,0 sy,  0,0 ni, 92,1 id,  0,0 wa,  0,3 hi,  0,0 si,  0,0 st
%Cpu4  :  8,4 us,  1,3 sy,  0,0 ni, 89,3 id,  0,0 wa,  0,3 hi,  0,7 si,  0,0 st
%Cpu5  :  8,6 us,  1,3 sy,  0,0 ni, 89,4 id,  0,0 wa,  0,3 hi,  0,3 si,  0,0 st
%Cpu6  :  8,0 us,  1,3 sy,  0,0 ni, 87,0 id,  0,0 wa,  3,7 hi,  0,0 si,  0,0 st
%Cpu7  : 10,6 us,  1,0 sy,  0,0 ni, 87,4 id,  0,0 wa,  0,3 hi,  0,7 si,  0,0 st

Which sounds reasonable.
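
For illustration, here is a hypothetical sketch of the O(log n) approach with simplified stand-in types (this is not the code from the commit below): the entries are kept in an ordered map keyed by (channel, begin time), so looking up a potential duplicate is a logarithmic search instead of a linear scan over all entries.

// Hypothetical sketch of an O(log n) duplicate check (simplified stand-in
// types, not the code from the actual fix).
#include <QMap>
#include <QPair>
#include <QString>
#include <QDateTime>

struct EpgEntry {
    QString channel;
    QDateTime begin;
    QDateTime end;
    QString details;
};

// Ordered key: QMap keeps the entries sorted by (channel, begin time).
using EpgKey = QPair<QString, QDateTime>;

class EpgModelSketch
{
public:
    // Inserts the entry unless an identical event already exists; details of
    // an otherwise identical event are merged in.
    bool addEntry(const EpgEntry &entry)
    {
        const EpgKey key(entry.channel, entry.begin);
        auto it = entries.find(key);                 // O(log n) lookup
        if (it != entries.end()) {
            if (it->end == entry.end) {
                if (it->details.isEmpty() && !entry.details.isEmpty())
                    it->details = entry.details;     // new data, same event
                return false;                        // duplicate, not re-added
            }
            // Same channel and begin time but different end: the new event
            // replaces the conflicting one below.
        }
        entries.insert(key, entry);                  // O(log n) insertion
        return true;
    }

private:
    QMap<EpgKey, EpgEntry> entries;
};

With lookup and insertion both O(log n) per event, the total work grows roughly as O(n log n) instead of O(n^2) when a provider broadcasts a large EPG.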
Comment 4 Mauro Carvalho Chehab 2017-02-16 11:28:11 UTC
Bug fixed on this commit:
 https://commits.kde.org/kaffeine/6686f17ea8157fdc5ff62f828143f0b81f86786c
Comment 5 Stefan Huebner 2017-02-25 17:56:05 UTC
Great news. Thank you, Mauro.