Bug 497873 - Neochat just stops updating, after random periods of time
Status: RESOLVED NOT A BUG
Alias: None
Product: NeoChat
Classification: Applications
Component: General (other bugs)
Version First Reported In: 24.12.0
Platform: Flatpak Linux
Priority: NOR  Severity: normal
Target Milestone: ---
Assignee: Tobias Fella
Reported: 2024-12-24 21:42 UTC by Shawn W Dunn
Modified: 2025-03-29 01:05 UTC
CC: 3 users



Attachments
Output from journalctl (15.94 KB, text/x-log)
2025-01-02 19:00 UTC, Shawn W Dunn

Description Shawn W Dunn 2024-12-24 21:42:17 UTC
SUMMARY
I have three accounts set up in NeoChat (chat.openSUSE.org, fedora.im, and matrix.org), and with the 24.12.* release the homeservers just seem to stop updating after what appear to be random periods of time.

When I notice that I haven't gotten any new messages in any of the chats for a while and restart NeoChat, I most often find a number of messages that never appeared in my client. The homeserver doesn't appear to matter; it happens with all three.

STEPS TO REPRODUCE
1. Open the NeoChat Flatpak
2. Use it normally
3. After what feels like a random amount of time, every time, my Matrix channels stop receiving new messages

OBSERVED RESULT
NeoChat reports no new messages in any of the chats.

EXPECTED RESULT
NeoChat reports new messages in chats as they come in.

SOFTWARE/OS VERSIONS
(available in the Info Center app, or by running `kinfo` in a terminal window)
Linux/KDE Plasma: openSUSE Kalpa (Tumbleweed), Linux-6.11.8
KDE Plasma Version: 6.2.4
KDE Frameworks Version: 6.9.1 (on the host) 6.8.0 (Flatpak Runtime)
Qt Version: 6.8.1

ADDITIONAL INFORMATION
Comment 1 John Kizer 2025-01-02 18:37:05 UTC
Hi - would you be able to check if anything shows up in your system journal (or possibly just terminal, when run from a terminal) when that happens?

Did this issue only begin with a recent version update, or have you noticed it for as long as you've used NeoChat?

I can't reproduce on Fedora KDE 41 using the RPM version, but I used the Flatpak version pretty extensively for a couple months and didn't notice it then, so I figured I'd check if it's something recent.

Thanks,
Comment 2 Shawn W Dunn 2025-01-02 19:00:23 UTC
Created attachment 177053 [details]
Output from journalctl
Comment 3 Shawn W Dunn 2025-01-02 19:02:08 UTC
I've attached the output of `journalctl -b | grep neochat`, which shows a number of coredumps; I only recall one actual crash in this time period.

This behavior just started with the 24.12 release, it wasn't present in 24.08.

Just for clarity's sake, my Flatpak comes from Flathub, not from something like the Fedora Flatpak repo.
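(Aside, not part of the original comment: on a systemd-coredump system such as openSUSE, the coredumps that `journalctl -b | grep neochat` surfaced can be inspected per-crash with `coredumpctl list neochat` and `coredumpctl info neochat`. A case-insensitive grep also avoids missing lines logged under "NeoChat" rather than "neochat" — a minimal sketch, using invented journal lines for illustration:)

```shell
# Real, system-dependent commands (not run here):
#   coredumpctl list neochat   # list dumps whose executable matches "neochat"
#   coredumpctl info neochat   # metadata + backtrace for the latest match
#   journalctl -b --output=short-iso | grep -i neochat   # timestamped journal
#
# Demo of the case-insensitive match on invented sample lines:
sample='Jan 02 10:00:01 host systemd-coredump[123]: Process 456 (neochat) dumped core.
Jan 02 10:00:02 host kernel: unrelated line
Jan 02 10:05:00 host NeoChat[789]: quotient.jobs.sync: "SyncJob-1" status Timeout'
printf '%s\n' "$sample" | grep -ci neochat   # counts matching lines: prints 2
```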
Comment 4 John Kizer 2025-01-03 03:56:12 UTC
Thanks!
Comment 5 Shawn W Dunn 2025-01-03 16:18:20 UTC
I don't know if it's *useful* or even relevant, but this morning I started NeoChat from Konsole via `flatpak run -v org.kde.neochat` and just let it run. I see the following output when NeoChat seems to stop updating:

```
Adding a continuation to a future which already has a continuation. The existing continuation is overwritten.
quotient.jobs.sync: "SyncJob-1028" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-1028" stopped without ready network reply
quotient.jobs.sync: "SyncJob-1028": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-1055" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-1055" stopped without ready network reply
quotient.jobs.sync: "SyncJob-1055": retry #1 in 0 s
```
Comment 6 John Kizer 2025-01-03 18:33:56 UTC
Hmm... sorry if you already said this and I just missed it, but can you check what libQuotient version you have in Settings > About NeoChat? Just wondering if that's somehow related; there have been a decent number of minor updates to it over the past couple of months.
Comment 7 Shawn W Dunn 2025-01-03 19:02:53 UTC
According to "About NeoChat":

NeoChat: 24.12.0
KDE Flatpak runtime (Wayland)
libQuotient: 0.9.2 (built against 0.9.2)
KDE Frameworks: 6.9.0
Qt: Using 6.8.1 and built against 6.8.1
Build ABI: x86_64-little_endian-lp64
Kernel: linux 6.12.6-1-default
Comment 8 Tobias Fella 2025-01-08 10:52:42 UTC
When these lines show up:

```
quotient.jobs.sync: "SyncJob-1028" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-1028" stopped without ready network reply
quotient.jobs.sync: "SyncJob-1028": retry #1 in 0 s
```

is that "a few lines every 30 seconds" or "many lines each second"?
Comment 9 Shawn W Dunn 2025-01-08 16:23:37 UTC
This instance was all over the course of about 5 seconds (the lack of timestamps and of direct attention makes that an estimate):
```
quotient.jobs.sync: "SyncJob-192" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-192" stopped without ready network reply
quotient.jobs.sync: "SyncJob-192": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-204" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-204" stopped without ready network reply
quotient.jobs.sync: "SyncJob-204": retry #1 in 0 s
```
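(Aside, not part of the original comment: the timing estimate above is hard to make precisely because the verbose output carries no timestamps. Piping it through a small prefixer fixes that; `stamp` below is a hypothetical helper name, sketched portably:)

```shell
# Prefix each incoming line with a UTC ISO-8601 timestamp.
stamp() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date -u +%FT%TZ)" "$line"
  done
}
# Real usage would be:
#   flatpak run -v org.kde.neochat 2>&1 | stamp
# Demo on two invented lines:
printf 'SyncJob-192 timed out\nSyncJob-204 timed out\n' | stamp
```

With timestamps in place, "a few lines every 30 seconds" versus "many lines each second" becomes directly readable from the log.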

Right now, with the fedora homeserver open in both Element and NeoChat: I sent a message at 08:15 local time from NeoChat, and it shows up in both places.

Three responses have since shown up in Element, at 08:17 and 08:18. It's now 08:22 local time, and nothing is showing up in the terminal running `flatpak run -v org.kde.neochat`, nor in the channel window in NeoChat itself.
Comment 10 Shawn W Dunn 2025-01-09 16:44:07 UTC
I don't have timestamps for this, but I left NeoChat running overnight, so this spans roughly nine hours after a fresh restart.

quotient.jobs: "GetNotificationsJob" status Timeout: The job has timed out
quotient.jobs: "GetNotificationsJob" stopped without ready network reply
quotient.jobs: "GetNotificationsJob": retry #1 in 0 s
quotient.jobs: 503 <- GET https://fedora.ems.host/_matrix/client/v3/notifications
quotient.jobs: "GetNotificationsJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/notifications - server replied: Service Temporarily Unavailable
quotient.jobs: "GetNotificationsJob": retry #1 in 2 s
quotient.jobs: 503 <- GET https://fedora.ems.host/_matrix/client/v3/notifications
quotient.jobs: "GetNotificationsJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/notifications - server replied: Service Temporarily Unavailable
quotient.jobs: "GetNotificationsJob": retry #2 in 5 s
quotient.jobs: 503 <- GET https://fedora.ems.host/_matrix/client/v3/notifications
quotient.jobs: "GetNotificationsJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/notifications - server replied: Service Temporarily Unavailable
quotient.jobs: "GetNotificationsJob": retry #3 in 5 s
quotient.jobs: 503 <- GET https://fedora.ems.host/_matrix/client/v3/notifications
quotient.jobs: "GetNotificationsJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/notifications - server replied: Service Temporarily Unavailable
quotient.jobs.sync: 502 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Bad Gateway
quotient.jobs.sync: "SyncJob-6265": retry #1 in 2 s
quotient.jobs: 502 <- POST https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers
quotient.jobs: "SetReadMarkerJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers - server replied: Bad Gateway
quotient.jobs: "SetReadMarkerJob": retry #1 in 2 s
qrc:/qt/qml/org/kde/kirigamiaddons/labs/components/Avatar.qml:201:9: QML QQuickImage: unexpected error validating access token (https://fedora.ems.host/_matrix/client/v1/media/download/matrix.org/KEDQEquYphAuginKYbAMZrbD?timeout_ms=20000)
quotient.jobs: 503 <- POST https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers
quotient.jobs: "SetReadMarkerJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers - server replied: Service Temporarily Unavailable
quotient.jobs: "SetReadMarkerJob": retry #2 in 5 s
quotient.jobs: 503 <- POST https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers
quotient.jobs: "SetReadMarkerJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers - server replied: Service Temporarily Unavailable
quotient.jobs: "SetReadMarkerJob": retry #3 in 5 s
quotient.jobs: 503 <- POST https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers
quotient.jobs: "SetReadMarkerJob" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/v3/rooms/%21lyWraihOYkbPiiTeLj%3Akde.org/read_markers - server replied: Service Temporarily Unavailable
quotient.jobs.sync: 504 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Gateway Time-out
quotient.jobs.sync: "SyncJob-6265": retry #2 in 5 s
quotient.jobs.sync: 503 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Service Temporarily Unavailable
quotient.jobs.sync: "SyncJob-6265": retry #3 in 15 s
quotient.jobs.sync: 503 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Service Temporarily Unavailable
quotient.jobs.sync: "SyncJob-6265": retry #4 in 15 s
quotient.jobs.sync: 503 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Service Temporarily Unavailable
quotient.jobs.sync: "SyncJob-6265": retry #5 in 15 s
quotient.jobs.sync: 503 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Service Temporarily Unavailable
quotient.jobs.sync: "SyncJob-6265": retry #6 in 15 s
quotient.jobs.sync: 503 <- GET https://fedora.ems.host/_matrix/client/r0/sync
quotient.jobs.sync: "SyncJob-6265" status NetworkError: Error transferring https://fedora.ems.host/_matrix/client/r0/sync?filter=%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D&timeout=30000&since=s136665591_1_5309_89411533_4949418_2952781_1386697_52921240_0_3964 - server replied: Service Temporarily Unavailable
quotient.jobs.sync: "SyncJob-6265": retry #7 in 15 s
quotient.jobs: "QueryKeysJob" stopped without ready network reply
quotient.jobs.sync: "SyncJob-10074" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-10074" stopped without ready network reply
quotient.jobs.sync: "SyncJob-10074": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-11783" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-11783" stopped without ready network reply
quotient.jobs.sync: "SyncJob-11783": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-11838" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-11838" stopped without ready network reply
quotient.jobs.sync: "SyncJob-11838": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-12517" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-12517" stopped without ready network reply
quotient.jobs.sync: "SyncJob-12517": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-13353" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-13353" stopped without ready network reply
quotient.jobs.sync: "SyncJob-13353": retry #1 in 0 s
quotient.jobs: "GetNotificationsJob" status Timeout: The job has timed out
quotient.jobs: "GetNotificationsJob" stopped without ready network reply
quotient.jobs: "GetNotificationsJob": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-13467" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-13467" stopped without ready network reply
quotient.jobs.sync: "SyncJob-13467": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-13644" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-13644" stopped without ready network reply
quotient.jobs.sync: "SyncJob-13644": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-14169" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-14169" stopped without ready network reply
quotient.jobs.sync: "SyncJob-14169": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-14225" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-14225" stopped without ready network reply
quotient.jobs.sync: "SyncJob-14225": retry #1 in 0 s
quotient.jobs: "GetNotificationsJob" status Timeout: The job has timed out
quotient.jobs: "GetNotificationsJob" stopped without ready network reply
quotient.jobs: "GetNotificationsJob": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-14559" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-14559" stopped without ready network reply
quotient.jobs.sync: "SyncJob-14559": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-15466" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-15466" stopped without ready network reply
quotient.jobs.sync: "SyncJob-15466": retry #1 in 0 s
quotient.jobs: "GetNotificationsJob" status Timeout: The job has timed out
quotient.jobs: "GetNotificationsJob" stopped without ready network reply
quotient.jobs: "GetNotificationsJob": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-15853" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-15853" stopped without ready network reply
quotient.jobs.sync: "SyncJob-15853": retry #1 in 0 s
quotient.jobs: "GetNotificationsJob" status Timeout: The job has timed out
quotient.jobs: "GetNotificationsJob" stopped without ready network reply
quotient.jobs: "GetNotificationsJob": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-15987" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-15987" stopped without ready network reply
quotient.jobs.sync: "SyncJob-15987": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-15994" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-15994" stopped without ready network reply
quotient.jobs.sync: "SyncJob-15994": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-16400" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-16400" stopped without ready network reply
quotient.jobs.sync: "SyncJob-16400": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-16469" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-16469" stopped without ready network reply
quotient.jobs.sync: "SyncJob-16469": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-17902" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-17902" stopped without ready network reply
quotient.jobs.sync: "SyncJob-17902": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-17980" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-17980" stopped without ready network reply
quotient.jobs.sync: "SyncJob-17980": retry #1 in 0 s
quotient.jobs.sync: "SyncJob-18243" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-18243" stopped without ready network reply
quotient.jobs.sync: "SyncJob-18243": retry #1 in 0 s
qrc:/qt/qml/org/kde/neochat/qml/RoomPage.qml:214: TypeError: Cannot read property 'id' of null
quotient.events.members: Mismatched name in the room members list; avoiding the list corruption
quotient.jobs.sync: "SyncJob-18815" status Timeout: The job has timed out
quotient.jobs.sync: "SyncJob-18815" stopped without ready network reply
quotient.jobs.sync: "SyncJob-18815": retry #1 in 0 s
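(Aside, not part of the original comment: the `filter=%7B%22...` blob in the sync URLs above is just URL-encoded JSON. The only percent-escapes that occur there are `%7B`, `%22`, and `%7D`, so a sed substitution — not a general URL decoder — is enough to read it:)

```shell
# Decode the sync filter from the URLs above. Handles only the three
# percent-escapes that actually occur in it; not a general URL decoder.
encoded='%7B%22account_data%22:%7B%7D,%22presence%22:%7B%7D,%22room%22:%7B%22account_data%22:%7B%7D,%22ephemeral%22:%7B%7D,%22state%22:%7B%22lazy_load_members%22:true%7D,%22timeline%22:%7B%22limit%22:100%7D%7D%7D'
printf '%s\n' "$encoded" | sed -e 's/%7B/{/g' -e 's/%22/"/g' -e 's/%7D/}/g'
```

The decoded filter asks for lazy-loaded room members and at most 100 timeline events per room — nothing unusual on the client side, which fits the 502/503/504 responses being server-side errors.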
Comment 11 Shawn W Dunn 2025-01-11 17:45:31 UTC
I got the 24.12.1 version yesterday, and I'm still seeing the retries in the log, *but* it seems to be better. I'm not declaring that it's *fixed* yet, but it doesn't seem to be stalling out like 24.12 was.

Let me run it for a day or two, and I'll report back.
Comment 12 Shawn W Dunn 2025-01-13 17:25:19 UTC
Unfortunately, I'm still seeing this behavior in 24.12.1.
Comment 13 Shawn W Dunn 2025-03-29 01:05:48 UTC
So after some further investigation, this appears to be a problem with EMS-hosted Matrix servers, not an issue with NeoChat.

I've been using alternative Matrix clients (Element, iamb, FluffyChat, Nheko), and I'm seeing the same behavior with the fedora homeserver on *all* of them.

I've also seen a bit of chatter on the web about the same issue with other EMS-hosted servers.

So I'm closing this to get it off your list of open bugs, as it's not a NeoChat problem.