Before every change made to a btrfs partition, PM (Partition Manager) casually calls btrfsck with the --repair option, even though btrfsck itself warns that it should never be run this way unless advised by a developer. This option has a strong potential to destroy the partition, making it unmountable. PM shouldn't call this (at least not by default).
It was fixed a long time ago: https://invent.kde.org/system/kpmcore/-/commit/1feab7ae42ad330138b84429306b7501420254b7
(which probably means you didn't run the version from git, as was specified in this bug report)
Is there any chance of backporting this change? I'm using a live USB (Lubuntu 24.04, released 2025-08-05) and am now seriously concerned I've just killed a partition that was working perfectly fine, just because I wanted more free space. On top of the check running by default, there's no output: it's been running for two hours on a ~500G partition (with GParted, or on ext4, it's usually a few-minute job before resizing). The progress is 0%, and I can see it using one whole core and a lot of memory in htop, but otherwise I have no idea what it's doing.

I'm on PartitionManager 23.08.5, and from that tag the readme says there aren't any open bugs for anything serious that could cause data loss. Admittedly this bug won't have been open at the time, but hopefully you see what I'm saying. Whilst this may all end up turning out fine, I'm quite concerned about severe data loss, and pretty recent distro releases (same month as the original date of this issue) are evidently shipping a version of PartitionManager that could be wiping out people's discs. I see mention of similar situations both here and through Google, where people may have attempted this through the CLI, enough to worry it's a fairly common situation. Could this issue be considered a high enough concern to patch that one line back into older releases and try to get it through to different distributions?
(In reply to stellarpower from comment #3)
> Is there any chance of backporting this change?

Even if you run with --repair, it's unlikely to cause any problems; for example, I never had problems on my system. There is probably a bit of confirmation bias if you search online for it, though. Anyway, it's up to distros to make backports. KDE does not do releases from such old branches anyway.
So even for e.g. security patches, are you saying it's up to the package maintainers to cherry-pick lines and merge those in? Aren't there occasions where something is severe enough to go back to previous branches and cut a new release for downstream to take up? I'm just trying to understand if that is the case, as this would differ from some other projects.

As of now it's been running for 24 hours on a ~600G filesystem, and PM is still at 0% complete with no output. I see some occasional disc activity from the check process, but mostly it's sitting with stable RAM usage, 100% CPU, and nothing on the disc. Was this normal on your systems? I get that it's been fixed in later releases, but in any case, as an end user, most distros are a year or so behind, and this has caused me pretty significant headaches. I have in effect had to take the day off work and do some housekeeping, because there's nothing I can really do without my machine while waiting for this (hopefully) to run to completion, all for the sake of seeing if a partition can be grown before attempting it. I asked on the btrfs reddit what it was doing, and (perhaps unsurprisingly) the replies ignored the real question: mostly just "never run check --repair" or "you should always have had a backup image before attempting it", so without any baseline I'll just have to see whether it's been damaged or not.

So I hope somebody somewhere can apply the patch, and I'd also be worried on behalf of anyone else using an older version of PM, in case it causes them worse issues. Usually when I dig out a live USB it's not likely to be that recent, as I only use it for occasional rescues, and that's exactly when you'd want your partition manager not to introduce further problems.
(In reply to stellarpower from comment #5)
> So even for e.g. security patches, are you saying it's up to the package
> maintainers to cherry-pick lines and merge those in? Aren't there occasions
> where something is severe enough to go back to previous branches and cut a
> new release for downstream to take up? I'm just trying to understand if that
> is the case, as this would differ from some other projects.

Right now Partition Manager is part of the automated KDE Gear yy.mm.patch release schedule, so you only get the standard monthly-or-so releases. Once the yy.mm.3 release is out, there won't be any further releases from that branch. I think it's technically not even possible, because we wouldn't have a corresponding translation branch (at the moment translations are in SVN, and there are only master and stable branches there).

Anyway, even before that, when KDE Partition Manager had standalone releases, I once created a bugfix release of the current stable branch that fixed a security bug with an assigned CVE. Luckily that bug didn't affect any of the earlier versions. All the distros either took a patch that fixed the CVE or took a new release, except for Ubuntu. It's still not fixed there: https://bugs.launchpad.net/ubuntu/+source/kpmcore/+bug/1903774. Ubuntu 20.04 still has that local root privilege escalation. So given that you are on an Ubuntu derivative, new releases won't help you; I don't think they care.

fsck for 24 hours seems excessive... But again, it's unlikely that --repair is causing it. If it's stuck, most likely a simple btrfs filesystem check would also get stuck. But unless your disk was already corrupted, I don't think killing the btrfsck process would do any harm.
(In reply to Andrius Štikonas from comment #6)
> So given that you are on an Ubuntu derivative, new releases won't help you;
> I don't think they care.
>
> fsck for 24 hours seems excessive... But again, it's unlikely that --repair
> is causing it. If it's stuck, most likely a simple btrfs filesystem check
> would also get stuck.

I see. The declining stability of Ubuntu is one reason I switched: if the packages aren't being tested thoroughly, I thought I'd try a semi-rolling release, since if it's going to have bugs at least I can update easily and they might get fixed.

In the meantime I managed to attach with ptrace, and once a minute or so I can see it's writing something and doing a sync. I've seen other threads mentioning a full repair taking a week or well over that; presumably those are much larger datasets, but I don't know by how much. I can go and trace through the code, but should any output from it be displayed immediately in PM? It should have spat out something on starting, but the window is completely empty. I'm not sure if it's buffered, whether that's an issue that has since been fixed, or if it really is just spinning very early on.
I think for now we don't have live output, which is a regression after porting to polkit (so that we don't run the GUI as root) :(. If I remember correctly, it will show that it is running fsck, but if you click advanced there won't be any output until the process is done.

Anyway, as far as Partition Manager is concerned, it just kicks off a process (btrfsck) and waits until it exits... Perhaps you could attach to that process and redirect its stdout to a file, e.g. https://serverfault.com/questions/213119/how-can-i-redirect-an-already-running-processs-stdout-stderr
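Roughly speaking, the pattern is something like the sketch below (plain Qt, not the actual ExternalCommand/helper code, and the device node is a placeholder): the process is started and we block until it exits, so its output only becomes available at the very end, which is why the advanced view stays empty while the check runs.

```cpp
// Minimal sketch (assumption: plain QProcess, not kpmcore's real code) of the
// "start it and wait for it to exit" behaviour described above.
#include <QCoreApplication>
#include <QProcess>
#include <QTextStream>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    const QString device = QStringLiteral("/dev/sdX1"); // placeholder device node

    QProcess check;
    check.start(QStringLiteral("btrfs"), { QStringLiteral("check"), device });
    check.waitForFinished(-1); // block indefinitely; nothing is reported in the meantime

    // Only once the process has exited is its output read back, which matches
    // the empty "advanced" view in the GUI until the check completes.
    QTextStream out(stdout);
    out << check.readAllStandardOutput();
    return check.exitCode();
}
```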
That's a shame. I assume it's more involved, then, than just polling [this](https://invent.kde.org/system/kpmcore/-/blob/master/src/util/externalcommand.cpp?ref_type=heads#L360).

Anyway, thank you for the help. You were right: the check just went into a spin for two days, saying the number of super bytes didn't match what was recorded (I can't remember the exact phrase, but I found similar reports online). I just killed it and all was well. Needless to say I won't try btrfs a second time!!

By the way, this was nice to have: https://github.com/jerome-pouiller/reredirect
(In reply to stellarpower from comment #9)
> Anyway, thank you for the help. You were right: the check just went into a
> spin for two days, saying the number of super bytes didn't match what was
> recorded (I can't remember the exact phrase, but I found similar reports
> online). I just killed it and all was well. Needless to say I won't try
> btrfs a second time!!

I actually use btrfs in production; it seems to work fine for me. Snapshots and incremental backups are nice. You can also have compression / deduplication.
(In reply to stellarpower from comment #9)
> That's a shame. I assume it's more involved, then, than just polling
> [this](https://invent.kde.org/system/kpmcore/-/blob/master/src/util/externalcommand.cpp?ref_type=heads#L360).

Well, polling is only half of the job. You also need to implement periodic sending here: https://invent.kde.org/system/kpmcore/-/blob/master/src/util/externalcommandhelper.cpp#L413

There are actually more things that still need to be fixed in that polkit helper; see the audit at https://bugzilla.suse.com/show_bug.cgi?id=1178848. The more serious issues from it were fixed, but there are still unfixed lower-priority items. Sending better error reports from the helper to the main program is one of them.
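To sketch what those two halves would look like together (plain Qt again, not the real helper; qDebug() stands in for whatever D-Bus signal the polkit helper would actually emit back to the GUI, and the device node is a placeholder):

```cpp
// Rough sketch of "poll the child's output, then forward it periodically".
#include <QCoreApplication>
#include <QDebug>
#include <QProcess>
#include <QTimer>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QProcess check;
    QByteArray pending; // output accumulated since the last forward

    // The "polling" half: collect output as soon as the child produces it.
    QObject::connect(&check, &QProcess::readyReadStandardOutput, [&]() {
        pending += check.readAllStandardOutput();
    });

    // The "periodic sending" half: forward whatever accumulated, once a second.
    // In the real helper this would be a message back to the GUI, not qDebug().
    QTimer forwardTimer;
    QObject::connect(&forwardTimer, &QTimer::timeout, [&]() {
        if (!pending.isEmpty()) {
            qDebug().noquote() << pending;
            pending.clear();
        }
    });
    forwardTimer.start(1000);

    // Quit with the child's exit code once it finishes.
    QObject::connect(&check, QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                     [&](int code, QProcess::ExitStatus) { app.exit(code); });

    check.start(QStringLiteral("btrfs"),
                { QStringLiteral("check"), QStringLiteral("/dev/sdX1") }); // placeholder device

    return app.exec();
}
```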
> I actually use btrfs in production; it seems to work fine for me. Snapshots
> and incremental backups are nice. You can also have compression /
> deduplication.

No doubt you, and many others, do. But in the course of this I've read too many discussions in various corners of the internet that are way down in the details, or where people say their data have been corrupted irreparably. I'm not worried about a bit flip myself, more that I accidentally nuke something. When the SUSE installer wanted to proceed with btrfs I thought I would give it a try, but all of those features I get for free with ZFS too, and yet I've never had anything this complicated come up. I've also had trouble mounting the whole drive on a foreign system; it sounds like it can't be done easily. With ZFS you just import it, and the snapshots are right there in a hidden folder. Complexity of building the kernel modules aside, it's just worked, so I'll stick to what I know.