| Summary: | Partition Manager fails to create new partition table if 'ddf_raid_member' signature is present | | |
|---|---|---|---|
| Product: | [Applications] partitionmanager | Reporter: | LaughingMan <lingm+kdebugs> |
| Component: | general | Assignee: | Andrius Štikonas <andrius> |
| Status: | REPORTED | Resolution: | --- |
| Severity: | normal | Priority: | NOR |
| Version First Reported In: | 24.08.1 | Target Milestone: | --- |
| Platform: | Other | OS: | Linux |
| Latest Commit: | | Version Fixed/Implemented In: | |
| Sentry Crash Report: | | | |
| Attachments: | Screenshot of the error in the GUI | | |
Description (reported by LaughingMan, 2024-10-11 15:46:39 UTC)
Andrius Štikonas:

Hmm, I don't think we want to add --force to sfdisk here, since it might protect us against trashing something in other cases. I'm not sure what the best way to fix this is, but I guess I should try to reproduce it... I suspect what happened in your case is that the Linux system noticed that this is a RAID disk and automatically activated it on /dev/md0. Can you try to reproduce the same problem, then run sudo mdadm --stop /dev/md0 and see if that helps? (That said, it's not yet clear to me what the solution should be.)

LaughingMan:

/dev/md0 doesn't exist. There's md, md126 and md127.

lsblk lists the device as:

> sdf       8:80    0  14,6T  0 disk
> ├─md126   9:126   0     0B  0 raid6
> └─md127   9:127   0     0B  0 md
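For readers hitting the same symptom, a minimal way to inspect a disk for the stale signature and for the arrays the kernel assembled from it. This is a sketch, not something run in this report; the device name /dev/sdf is taken from the reporter's lsblk output, so adjust it to your system:

```
# Probe for on-disk signatures; a leftover DDF RAID header shows up as
# TYPE="ddf_raid_member" (the signature named in the bug summary).
sudo blkid --probe /dev/sdf

# List md arrays the kernel has auto-assembled and their member devices.
cat /proc/mdstat

# Show which md devices sit on top of the disk.
lsblk -o NAME,MAJ:MIN,SIZE,TYPE /dev/sdf
```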
Andrius Štikonas (in reply to LaughingMan, comment #3):

Ok, but the same idea applies: the RAID got auto-activated, so sfdisk noticed that the device is in use and refused to work on it.

LaughingMan:

So, should I run

> sudo mdadm --stop /dev/md126

or

> sudo mdadm --stop /dev/md127

? I'm a little out of my depth here.

Andrius Štikonas (in reply to LaughingMan, comment #5):

Probably both, though I'm not an expert on RAID. But at least lsblk suggests that both are somehow derived from /dev/sdf.

LaughingMan:

Ok:
- Stopped md126.
- Tried creating a partition table -> failed.
- Stopped md127.
- Tried creating a partition table -> success.

Maybe Partition Manager should detect this case and run that stop command on behalf of the user? Possibly after another confirmation dialogue, something like "The disk you're trying to modify is currently mounted as a RAID. Unmount now? [ ] Yes [ ] No".

Andrius Štikonas (in reply to LaughingMan, comment #7):

Perhaps, though it's not clear how to implement that. Anyway, for now I'll leave this open; I think we've gathered enough data to root-cause it, so you can put your disk into production.

Andrius Štikonas:

There is actually an old branch of kpmcore (raid-support) that does have some knowledge of mdadm, but it was never merged to master. Perhaps I'll see if there is anything there that helps with this.

LaughingMan (in reply to Andrius Štikonas, comment #8):

> I think we've gathered enough data to root cause it. So you can put your disk to production.

Cool. Although in case that wasn't clear: my testing was necessarily destructive. The creation of the new partition table only fails after hitting "Apply" and confirming; either it fails or the disk gets modified. Since my test earlier succeeded, the RAID stuff is already gone.
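For reference, the workaround that succeeded in this thread, collected into one sketch. The device and array names (/dev/sdf, md126, md127) are the reporter's; the wipefs step is an addition for clearing the leftover signature, not something the thread itself ran:

```
# Stop the auto-assembled arrays so the kernel releases the disk.
sudo mdadm --stop /dev/md126
sudo mdadm --stop /dev/md127

# Optionally erase the stale ddf_raid_member signature so the arrays are
# not re-assembled on the next boot. Destructive: double-check the device!
sudo wipefs --all --types ddf_raid_member /dev/sdf
```

Stopping the arrays is non-destructive; wipefs is not, so it only belongs on a disk whose RAID metadata is genuinely stale.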
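As for the suggestion that Partition Manager detect this case before applying operations: one possible heuristic, purely an assumption here and not anything kpmcore currently implements, is to check the disk's holders in sysfs before invoking sfdisk:

```
# A disk claimed by an md (or device-mapper) device lists those holders
# under /sys/block/<disk>/holders; an empty directory means nothing
# upstream is using the disk (the name sdf is assumed from this report).
ls /sys/block/sdf/holders    # e.g. prints: md126 md127
```

If the listing is non-empty, the GUI could show the confirmation dialogue proposed in the thread before running mdadm --stop on the holders.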