Bug 454907 - Flatpak Discord crashes on launch when launched from Plasma or KRunner when those were themselves launched using `kstart5`
Summary: Flatpak Discord crashes on launch when launched from Plasma or KRunner when those were themselves launched using `kstart5`
Status: RESOLVED WORKSFORME
Alias: None
Product: kde-cli-tools
Classification: Plasma
Component: general (other bugs)
Version First Reported In: master
Platform: Other Linux
Importance: NOR normal
Target Milestone: ---
Assignee: Aleix Pol
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2022-06-05 21:34 UTC by Nate Graham
Modified: 2022-08-05 13:52 UTC
CC List: 1 user

See Also:
Latest Commit:
Version Fixed In:
Sentry Crash Report:


Attachments
plasma env (4.62 KB, text/plain)
2022-06-05 21:56 UTC, Nate Graham
krunner env (2.53 KB, text/plain)
2022-06-05 21:56 UTC, Nate Graham
diff of the files (3.90 KB, text/plain)
2022-06-05 22:08 UTC, Nate Graham
working krunner vs broken krunner differences (3.02 KB, text/plain)
2022-06-05 22:18 UTC, Nate Graham

Description Nate Graham 2022-06-05 21:34:15 UTC
I have Discord installed from Flathub. The .desktop file it installs has the following command in its Exec= field:
> /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=discord com.discordapp.Discord
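For reference, the exported file can be inspected with something like this (a sketch; the path assumes a system-wide Flatpak install, per-user installs export under ~/.local/share/flatpak/exports/share/applications instead):

    grep '^Exec=' /var/lib/flatpak/exports/share/applications/com.discordapp.Discord.desktop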


When I click on Discord in Kickoff or on its pinned icon in my Task Manager, it starts to load, then crashes. Console output:

> [6 zypak-helper] Error retrieving version property: org.freedesktop.DBus.Error.ServiceUnknown: The name is not activatable
> [6 zypak-helper] Error retrieving supports property: org.freedesktop.DBus.Error.ServiceUnknown: The name is not activatable
> [6 zypak-helper] WARNING: Unknown portal version
> file:///home/nate/kde/usr/share/plasma/plasmoids/org.kde.plasma.taskmanager/contents/ui/Task.qml:384: Unable to assign [undefined] to QString
> libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
> [2022-05-26 09:06:44.399] [132] (engine.cpp:231): Time: Thu May 26 09:06:44 2022 MDT logLevel:2
> thread '<unnamed>' panicked at 'failed printing to stdout: Broken pipe (os error 32)', library/std/src/io/stdio.rs:1201:9
> note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
> Cannot upload crash dump: failed to open
> --2022-05-26 09:06:44--  https://sentry.io/api/146342/minidump/?sentry_key=384ce4413de74fe0be270abe03b2b35a
> Resolving sentry.io (sentry.io)... 35.188.42.15
> Connecting to sentry.io (sentry.io)|35.[removed]. connected.
> HTTP request sent, awaiting response... 400 Bad Request
> 2022-05-26 09:06:45 ERROR 400: Bad Request.
> 
> 
> Unexpected crash report id length
> Failed to get crash dump id.
> Report Id: 
> [WebContents] crashed (reason: crashed, exitCode: 134)... reloading
> [2022-05-26 09:06:46.709] [157] (engine.cpp:231): Time: Thu May 26 09:06:46 2022 MDT logLevel:2
> thread '<unnamed>' panicked at 'failed printing to stdout: Broken pipe (os error 32)', library/std/src/io/stdio.rs:1201:9
> note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
> Cannot upload crash dump: failed to open
> --2022-05-26 09:06:46--  https://sentry.io/api/146342/minidump/?sentry_key=384ce4413de74fe0be270abe03b2b35a
> Resolving sentry.io (sentry.io)... 35.188.42.15
> Connecting to sentry.io (sentry.io)|35.[removed]. connected.
> HTTP request sent, awaiting response... 400 Bad Request
> 2022-05-26 09:06:47 ERROR 400: Bad Request.
> 
> 
> Unexpected crash report id length
> Failed to get crash dump id.
> Report Id: 
> [WebContents] double crashed (reason: crashed, exitCode: 134)... RIP =(
> [15 zypak-sandbox] Dropping 0x563d2b2d52c0 (3) because of connection closed
> [15 zypak-sandbox] Host is gone, preparing to exit...
> [15 zypak-sandbox] Quitting Zygote...


Backtrace:

> Core was generated by `/app/discord/Discord --type=renderer --autoplay-policy=no-user-gesture-required'.
> Program terminated with signal SIGABRT, Aborted.
> #0  0x00007f87e43b54bb in raise () from /usr/lib/x86_64-linux-gnu/libc.so.6
> [Current thread is 1 (Thread 0x7f87d7e9d640 (LWP 132))]
> (gdb) bt
> #0  0x00007f87e43b54bb in raise () at /usr/lib/x86_64-linux-gnu/libc.so.6
> #1  0x00007f87e439e877 in abort () at /usr/lib/x86_64-linux-gnu/libc.so.6
> #2  0x00007f87d8e8edc7 in panic_abort::__rust_start_panic::abort ()
>     at library/panic_abort/src/lib.rs:44
> #3  0x00007f87d8e8eda6 in panic_abort::__rust_start_panic () at library/panic_abort/src/lib.rs:39
> #4  0x00007f87d8e77f7c in std::panicking::rust_panic () at library/std/src/panicking.rs:654
> #5  0x00007f87d8e77e1b in std::panicking::rust_panic_with_hook ()
>     at library/std/src/panicking.rs:624
> #6  0x00007f87d8e77830 in std::panicking::begin_panic_handler::{closure#0} ()
>     at library/std/src/panicking.rs:502
> #7  0x00007f87d8e747c4 in std::sys_common::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::{closure#0}, !> () at library/std/src/sys_common/backtrace.rs:139
> #8  0x00007f87d8e77799 in std::panicking::begin_panic_handler ()
>     at library/std/src/panicking.rs:498
> #9  0x00007f87d8325021 in core::panicking::panic_fmt () at library/core/src/panicking.rs:106
> #10 0x00007f87d8e63545 in std::io::stdio::print_to<std::io::stdio::Stdout> ()
>     at library/std/src/io/stdio.rs:1201
> #11 std::io::stdio::_print () at library/std/src/io/stdio.rs:1213
> #12 0x00007f87d8c9f994 in rtc_log_init ()
>     at /home/nate/.var/app/com.discordapp.Discord/config/discord/0.0.17/modules/discord_voice/discord_voice.node
> #13 0x00007f87d838b8e2 in discord::media::Engine::Engine(std::__1::shared_ptr<discord::uv::ThreadedEventLoop>, std::__1::unique_ptr<discord::media::AdmFactory, std::__1::default_delete<discord::media::AdmFactory> >, discord::media::EngineConfig const&) (this=Python Exception <class 'gdb.MemoryError'>: Cannot access memory at address 0x7f8700000001
> 
>     0xdf891410418, loop=#14 0x00007f87d8372339 in std::__1::__compressed_pair_elem<discord::media::Engine, 1, false>::__compressed_pair_elem<std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >&&, discord::media::EngineConfig const&, 0ul, 1ul, 2ul>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >&&, discord::media::EngineConfig const&>, std::__1::__tuple_indices<0ul, 1ul, 2ul>)
>     (this=0xdf891410418, __args=...)
>     at ../../discord_common/native/third_party/llvm/libcxx/include/memory:2195
> #15 std::__1::__compressed_pair<std::__1::allocator<discord::media::Engine>, discord::media::Engine>::__compressed_pair<std::__1::allocator<discord::media::Engine>&, std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >&&, discord::media::EngineConfig const&>(std::__1::piecewise_construct_t, std::__1::tuple<std::__1::allocator<discord::media::Engine>&>, std::__1::tuple<std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >&&, discord::media::EngineConfig const&>) (this=0xdf891410418, __first_args=..., __second_args=..., __pc=...)
>     at ../../discord_common/native/third_party/llvm/libcxx/include/memory:2298
> #16 std::__1::__shared_ptr_emplace<discord::media::Engine, std::__1::allocator<discord::media::Engine> >::__shared_ptr_emplace<std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >, discord::media::EngineConfig const&>(std::__1::allocator<discord::media::Engine>, std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >&&, discord::media::EngineConfig const&)
>     (this=0xdf891410400, __args=..., __args=<optimized out>, __args=<optimized out>, __a=...)
>     at ../../discord_common/native/third_party/llvm/libcxx/include/memory:3583
> #17 std::__1::make_shared<discord::media::Engine, std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >, discord::media::EngineConfig const&>(std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, std::__1::unique_ptr<discord::media::AdmFactoryDefault, std::__1::default_delete<discord::media::AdmFactoryDefault> >&&, discord::media::EngineConfig const&)
>     (__args=..., __args=<optimized out>, __args=<optimized out>)
>     at ../../discord_common/native/third_party/llvm/libcxx/include/memory:4419
> #18 discord::MediaEngine::MediaEngine(std::__1::shared_ptr<discord::uv::ThreadedEventLoop>, discord::media::EngineConfig const&) (this=0xdf89107ebf0, loop=..., config=...)
>     at ../../discord_native_lib/src/media_engine.cpp:12
> #19 0x00007f87d836e3de in std::__1::make_unique<discord::MediaEngine, std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, discord::media::EngineConfig>(std::__1::shared_ptr<discord::uv::ThreadedEventLoop>&, discord::media::EngineConfig&&) (__args=..., __args=...)
>     at ../../discord_common/native/third_party/llvm/libcxx/include/memory:3043
> #20 Discord::Discord(DiscordConfig)::$_2::operator()() const (this=0xdf8910d87e8)
>     at ../../discord_native_lib/src/discord.cpp:70
> #21 discord::uv::Task<Discord::Discord(DiscordConfig)::$_2>::Run() (this=0xdf8910d87e0)
>     at ../../discord_common/native/uv/executor.h:29
> #22 0x00007f87d8367ed9 in discord::uv::Executor::ExecutePending() (this=0xdf890e24f00)
>     at ../../discord_common/native/uv/executor.cpp:140


If I launch Discord from KRunner, it launches successfully, with no crash.

If I launch Discord by using Konsole to manually execute the Exec line from the .desktop file (`/usr/bin/flatpak run --branch=stable --arch=x86_64 --command=discord com.discordapp.Discord`), it launches successfully, with no crash.

It also launches successfully with no crash if I simply run `flatpak run com.discordapp.Discord`.

It seems like Kickoff and the Task Manager are launching it in a different way from KRunner, and this different way causes it to crash on launch for some reason.
Comment 1 David Edmundson 2022-06-05 21:45:19 UTC
A crashing client is the fault of the client. We may be doing something different, but the crash itself is still on the client.

As to why Kickoff and KRunner behave differently: the only difference could be the environment of the two processes.

Get and compare with 

cat /proc/`pidof plasmashell`/environ | tr '\0' '\n'
cat /proc/`pidof krunner`/environ | tr '\0' '\n'

at a time when things are breaking.
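One way to compare the two dumps side by side (a sketch; sorting first so ordering differences drop out, and assuming a single instance of each process):

    diff <(tr '\0' '\n' < /proc/$(pidof plasmashell)/environ | sort) \
         <(tr '\0' '\n' < /proc/$(pidof krunner)/environ | sort)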
Comment 2 Nate Graham 2022-06-05 21:56:02 UTC
Created attachment 149493 [details]
plasma env
Comment 3 Nate Graham 2022-06-05 21:56:14 UTC
Created attachment 149494 [details]
krunner env
Comment 4 Nate Graham 2022-06-05 22:08:23 UTC
Created attachment 149498 [details]
diff of the files
Comment 5 Nate Graham 2022-06-05 22:18:30 UTC
Created attachment 149499 [details]
working krunner vs broken krunner differences

I noticed something interesting. When KRunner is working, if I quit it and restart it with `krunner --replace` in a Konsole window, then launching Discord from KRunner is broken. I'm attaching a dump of the env differences between the working KRunner and the broken KRunner.
Comment 6 David Edmundson 2022-06-06 09:11:59 UTC
Can you test with just this change:

< PATH=/home/nate/kde/usr/bin:/home/nate/bin:/home/nate/.local/bin:/home/nate/bin:/usr/lib64/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
---
> PATH=/home/nate/bin:/home/nate/kde/usr/bin:/home/nate/bin:/home/nate/.local/bin:/home/nate/bin:/usr/lib64/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin

I'm not sure which way round is the right/wrong one.
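For example, something like this (a sketch; it uses the first of the two values above, and the same test can be repeated with the second):

    export PATH=/home/nate/kde/usr/bin:/home/nate/bin:/home/nate/.local/bin:/home/nate/bin:/usr/lib64/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
    krunner --replace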
Comment 7 Nate Graham 2022-06-06 13:26:53 UTC
So this is interesting. When I set $PATH to either of those values and run krunner with kstart5, the problem manifests.

When I set $PATH to either of those values and run krunner with the `krunner` binary, the problem goes away. Both $PATH values work. So $PATH doesn't seem relevant, and it's something else that kstart5 does or sets.
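In shorthand, with the same $PATH exported both times, the two launch paths were roughly (a sketch; the running krunner instance is quit before each test):

    kstart5 krunner    # problem manifests: Discord launched from KRunner crashes
    krunner            # problem goes away: Discord launched from KRunner works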
Comment 8 David Edmundson 2022-06-08 15:26:06 UTC
Have you confirmed "kstart5 flatpak-run .... " breaks?
Comment 9 Nate Graham 2022-06-08 15:29:48 UTC
I've discovered that it works from plasmashell too, until the moment I start plasmashell again with kstart5 in a terminal. Thereafter, it doesn't work from Plasma either. If I start Plasma manually with `plasmashell`, it also works!

So the difference really seems to be in something that kstart5 is doing. Only when Plasma and KRunner are launched with kstart5 does using them to launch Discord cause it to crash on launch for me.
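In shorthand (a sketch of the pattern; the running plasmashell is quit before each relaunch):

    kstart5 plasmashell    # Discord then crashes when launched from Kickoff or the Task Manager
    plasmashell            # Discord then launches fine from Kickoff or the Task Manager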
Comment 10 Nate Graham 2022-06-08 15:32:21 UTC
(In reply to David Edmundson from comment #8)
> Have you confirmed "kstart5 flatpak-run .... " breaks?

Yes, that breaks too.
Comment 11 hey 2022-08-05 07:53:07 UTC
This may only be tangentially related, but I'm having a similar problem when running KeePassXC from Flatpak via a keyboard shortcut, which goes through KGlobalAccel, which in turn also uses kstart to run the .desktop file.

This is in the journal when I type the shortcut:

    kwin_wayland_wrapper[10898]: kstart: Unknown options: branch, arch, command, file-forwarding.

All the options kstart complains about are options to the `flatpak run` command line:

    /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=keepassxc --file-forwarding org.keepassxc.KeePassXC @@ %f @@

This has happened before [1], and the solution was to prefix the command line with `--`.
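For reference, the shape of that earlier fix was to have the launcher put a `--` in front of the command so kstart stops treating the flatpak options as its own, roughly (a sketch; the `@@ %f @@` file-forwarding part is left off here):

    kstart5 -- /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=keepassxc --file-forwarding org.keepassxc.KeePassXC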

I'm on Fedora Kinoite 36, kf5-kglobalaccel 5.96.0.

[1] https://bugs.kde.org/show_bug.cgi?id=433362
Comment 12 hey 2022-08-05 09:22:55 UTC
Disregard my recent comment, please; that issue is tracked by https://bugs.kde.org/show_bug.cgi?id=456589.
Comment 13 Nate Graham 2022-08-05 13:52:32 UTC
The issue has stopped reproducing for me; closing for now.