Bug 500697 - [ANR] Kasts freezes at refreshing podcast feeds
Summary: [ANR] Kasts freezes at refreshing podcast feeds
Status: RESOLVED FIXED
Alias: None
Product: kasts
Classification: Applications
Component: general (other bugs)
Version First Reported In: unspecified
Platform: Debian unstable Linux
Importance: NOR crash
Target Milestone: ---
Assignee: bart
URL:
Keywords: drkonqi
Depends on:
Blocks:
 
Reported: 2025-02-24 22:45 UTC by Josep Febrer
Modified: 2025-02-26 23:42 UTC (History)
0 users

See Also:
Latest Commit:
Version Fixed/Implemented In:
Sentry Crash Report: https://crash-reports.kde.org/organizations/kde/issues/138441/events/f18a4347dd174daba04b18f976c91444/


Attachments
New crash information added by DrKonqi (138.13 KB, text/plain)
2025-02-24 22:45 UTC, Josep Febrer

Description Josep Febrer 2025-02-24 22:45:33 UTC
Application: kasts (25.03.70)

ApplicationNotResponding [ANR]: true
Qt Version: 6.7.2
Frameworks Version: 6.11.0
Operating System: Linux 6.13.4-josep1 x86_64
Windowing System: Wayland
Distribution: Debian GNU/Linux trixie/sid
DrKonqi: 6.3.0 [CoredumpBackend]

-- Information about the crash:
I built Kasts from current master, and when I refresh the podcast feeds it starts updating, but at some point it freezes and I have to force quit it.
If I start Kasts from the terminal, I see this error message while it's freezing:

Error happened: Error::Database "" "" 1 "INSERT INTO Entries VALUES (:feed, :id, :title, :content, :created, :updated, :link, :read, :new, :hasEnclosure, :image, :favorite);"

However, on another PC the same build of Kasts works perfectly, so maybe the problem is specific to this database.
I will attach here the problematic database.

The crash can be reproduced every time.

-- Backtrace (Reduced):
#5  0x00007f10798e4fb5 in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0, req=0x7fff0227d4a0, rem=0x7fff0227d4a0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:48
#6  0x00007f10798f0373 in __GI___nanosleep (req=<optimized out>, rem=<optimized out>) at ../sysdeps/unix/sysv/linux/nanosleep.c:25
#7  0x00007f107a0c49b5 in qt_nanosleep (amount=...) at ./src/corelib/thread/qthread_unix.cpp:507
#8  QThread::sleep (nsec=std::chrono::duration = { <optimized out>ns }) at ./src/corelib/thread/qthread_unix.cpp:527
#9  0x00007f107a0c4a10 in QThread::usleep (usecs=<optimized out>) at ./src/corelib/thread/qthread_unix.cpp:522


Reported using DrKonqi
Comment 1 Josep Febrer 2025-02-24 22:45:34 UTC
Created attachment 178848 [details]
New crash information added by DrKonqi

DrKonqi auto-attaching complete backtrace.
Comment 2 Josep Febrer 2025-02-24 22:51:43 UTC
I couldn't attach the Kasts database here because it's too big, so I uploaded it elsewhere; you can download it from:

https://josepfebrer.com/nb/database.db3
Comment 3 bart 2025-02-25 18:58:06 UTC
Thanks for the report.

My first coarse analysis: Kasts is actually not really frozen, it's just re-trying failed database transactions with an increasing time between attempts. But, if there are many failing transactions, I can see how the total waiting time would explode, which makes it seem like it's completely unresponsive.
This is a new feature on the master branch, but I see now that the app probably needs to give some feedback when this happens.  Also, does the UI stay responsive, or does it also hang?  Since these db transactions are running in separate threads, I would expect the UI to stay responsive.

Anyway, it seems like the root cause is a "corruption" of your database, which explains why this doesn't happen on other systems.  I'll try and have a look at your database later to pinpoint why that has happened (so the cause can also be addressed, potentially).
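The retry behaviour described above can be sketched roughly like this (a hypothetical illustration in Python, not the actual Kasts C++ code; function names and delays are made up). With exponentially increasing delays, a transaction that can never succeed wastes the full backoff budget before giving up, and thousands of such transactions add up to an apparent freeze:

```python
import time

def write_with_retry(execute, max_attempts=5, base_delay=0.1):
    """Retry a database write with exponentially increasing delays.

    Hypothetical sketch of the retry-with-backoff behaviour described
    above; a permanently failing write burns through every delay
    (0.1s, 0.2s, 0.4s, ...) before the error finally propagates.
    """
    for attempt in range(max_attempts):
        try:
            return execute()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: re-raise the original error
            time.sleep(base_delay * 2 ** attempt)
```

Multiply the per-transaction worst case by thousands of failing inserts and the total wait explodes, which matches the "seems completely unresponsive" symptom.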
Comment 4 Josep Febrer 2025-02-25 19:24:24 UTC
(In reply to bart from comment #3)
> Thanks for the report.
> 
> My first coarse analysis: Kasts is actually not really frozen, it's just
> re-trying failed database transactions with an increasing time between
> attempts. But, if there are many failing transactions, I can see how the
> total waiting time would explode, which makes it seem like it's completely
> unresponsive.
> This is a new feature on the master branch, but I see now that the app
> probably needs to give some feedback when this happens.  Also, does the UI
> stay responsive, or does it also hang?  Since these db transactions are
> running in separate threads, I would expect the UI to stay responsive.
> 
> Anyway, it seems like the root cause is a "corruption" of your database,
> which explains why this doesn't happen on other systems.  I'll try and have
> a look at your database later to pinpoint why that has happened (so the
> cause can also be addressed, potentially).

You are right that Kasts is not frozen: in the terminal I see the database error repeating from time to time, so it's still trying.
But the UI is completely frozen, and if I click on it, after a while the unresponsive-window dialog appears offering to force close it.

BTW, you are doing great work with Kasts.
Comment 5 bart 2025-02-26 10:02:58 UTC
Ok, found the issue after loading your database. It seems you have a few podcasts with non-unique ids for the entries.
In principle this should never happen, but there seem to be quite a few podcasts out there that don't stick to the "rules".  These are mainly podcasts that release the same episodes in multiple formats (e.g. ogg vs mp3).  Unfortunately, the app and its database were written on the assumption that those ids are unique (which they should be!).  So the app keeps trying to add new entries with ids that are already in the database, which results in a lot of database errors.
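The failure mode is easy to reproduce with a minimal sqlite3 sketch (the real Entries table has more columns than shown here; this schema is just illustrative). An episode re-released under the same id in a second format violates the uniqueness constraint:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Illustrative schema: in the real table, id uniqueness is assumed.
con.execute("CREATE TABLE Entries (feed TEXT, id TEXT PRIMARY KEY, title TEXT)")
con.execute("INSERT INTO Entries VALUES (?, ?, ?)",
            ("feed1", "ep-001", "Episode 1 (mp3)"))
try:
    # Same episode id published again in another format.
    con.execute("INSERT INTO Entries VALUES (?, ?, ?)",
                ("feed1", "ep-001", "Episode 1 (ogg)"))
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: Entries.id
```

No amount of retrying makes this insert succeed; the constraint violation is permanent.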

Before the change on master that re-tries failed transactions, these would just fail and, in the worst case, you simply wouldn't see the failed, potentially duplicate episodes.
On master right now, it keeps retrying to add each of these episodes several times.  Since there are a lot of those non-unique ids, this basically takes forever.

Having said that, it just doesn't make sense to retry every type of db transaction error.  It only makes sense to retry errors caused by the database being locked due to simultaneous writes.  So I'll push a change to the master branch later today to fix this.  It should fix your issues.
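The fix described here amounts to classifying errors before retrying. A rough sketch of the idea (in Python with sqlite3 for illustration; the actual fix is in Kasts' src/database.cpp using Qt's SQL classes): only transient lock contention is retried, while constraint violations fail immediately.

```python
import sqlite3
import time

def is_retryable(exc):
    # "database is locked" is transient contention from a concurrent
    # writer; constraint violations (e.g. duplicate ids) will fail on
    # every attempt, so retrying them is pure wasted time.
    return isinstance(exc, sqlite3.OperationalError) and "locked" in str(exc)

def run_write(execute, max_attempts=5, base_delay=0.1):
    for attempt in range(max_attempts):
        try:
            return execute()
        except sqlite3.Error as exc:
            if not is_retryable(exc) or attempt == max_attempts - 1:
                raise  # give up immediately on non-lock errors
            time.sleep(base_delay * 2 ** attempt)
```

With this gate, a feed full of duplicate-id episodes produces one failed insert per entry instead of a long backoff loop per entry.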

NB: Your db also triggered the realization that a few other improvements are sorely needed:
- I should do a major rewrite to not rely on the id being unique anymore. This might be tricky but is long overdue.
- After the feed update of your database, about 11000 new episodes are added to the queue. These are currently added one by one, which is extremely inefficient since the UI is updated after every single addition.  This now takes several minutes and blocks the UI, so it might seem like the app is hanging.  In principle, this can be rewritten as a bulk action, which should take just (a fraction of) a second.
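The second bullet is essentially a batching problem. As a rough illustration (the Queue table and its columns here are hypothetical, not the actual Kasts schema), a single executemany in one transaction replaces 11000 individual inserts, each of which would otherwise trigger its own commit and UI refresh:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical queue table for illustration only.
con.execute("CREATE TABLE Queue (listnr INTEGER, feed TEXT, id TEXT)")

episodes = [(i, "feed1", f"ep-{i}") for i in range(11000)]

# Bulk path: one transaction covering all rows, instead of one
# commit (and, in the app, one UI update) per episode.
with con:
    con.executemany("INSERT INTO Queue VALUES (?, ?, ?)", episodes)

print(con.execute("SELECT COUNT(*) FROM Queue").fetchone()[0])  # 11000
```

The speedup comes from paying transaction overhead (and UI signalling) once rather than 11000 times.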
Comment 6 Bug Janitor Service 2025-02-26 10:04:58 UTC
A possibly relevant merge request was started @ https://invent.kde.org/multimedia/kasts/-/merge_requests/262
Comment 7 bart 2025-02-26 10:07:59 UTC
Git commit f1157a5b5ccb8f68d0d4e9badc3c716ba6f7b918 by Bart De Vries.
Committed on 26/02/2025 at 10:03.
Pushed by bdevries into branch 'master'.

Only retry db fails if it's due to the db being locked

M  +2    -1    src/database.cpp

https://invent.kde.org/multimedia/kasts/-/commit/f1157a5b5ccb8f68d0d4e9badc3c716ba6f7b918
Comment 8 Josep Febrer 2025-02-26 23:42:56 UTC
(In reply to bart from comment #7)
> Git commit f1157a5b5ccb8f68d0d4e9badc3c716ba6f7b918 by Bart De Vries.
> Committed on 26/02/2025 at 10:03.
> Pushed by bdevries into branch 'master'.
> 
> Only retry db fails if it's due to the db being locked
> 
> M  +2    -1    src/database.cpp
> 
> https://invent.kde.org/multimedia/kasts/-/commit/
> f1157a5b5ccb8f68d0d4e9badc3c716ba6f7b918

Thank you! It worked!