Bug 422877

Summary: SMART status is not working on NVMe drives
Product: [Applications] partitionmanager
Reporter: soredake <katyaberezyaka>
Component: general
Assignee: Andrius Štikonas <andrius>
Status: CONFIRMED
Severity: normal
CC: akozlovskiy119, hurricanepootis, nate, righn, sitter, zawertun
Priority: NOR
Version: 4.1.0
Target Milestone: ---
Platform: Other
OS: Linux
URL: https://github.com/storaged-project/udisks/pull/975
See Also: https://bugs.kde.org/show_bug.cgi?id=428564
Attachments: problem, log, working log, smartctl JSON for NVMe drive

Description soredake 2020-06-12 14:29:18 UTC
Created attachment 129262 [details]
problem

SUMMARY

SMART status is not working.

STEPS TO REPRODUCE
1. Install Kubuntu on a PC with an NVMe drive.
2. Try to view the SMART status for this drive.

OBSERVED RESULT

SMART status is not working.

EXPECTED RESULT

SMART status is working.

SOFTWARE/OS VERSIONS
Linux/KDE Plasma: 20.04
KDE Plasma Version: 5.18.4
KDE Frameworks Version: 5.68.0
Qt Version: 5.12.8

ADDITIONAL INFORMATION
Comment 1 soredake 2020-06-12 14:29:33 UTC
Created attachment 129263 [details]
log
Comment 2 Andrius Štikonas 2020-06-12 14:33:55 UTC
Can you run

smartctl --json --all /dev/nvme0n1

and see if it works?
Comment 3 soredake 2020-06-12 14:44:18 UTC
Created attachment 129264 [details]
working log

Yes, it works.
Comment 4 Andrius Štikonas 2020-10-03 15:16:58 UTC
I've now bought an NVMe drive for my aarch64 board too, and I can reproduce this.
Comment 5 Andrius Štikonas 2020-10-05 22:28:50 UTC
I looked a bit more at the code; this is probably quite non-trivial.

smartctl outputs SMART information for NVMe disks in a different way, and the current code for reading SMART attributes is already quite messy (I think parts were imported from libatasmart). It's probably best to simplify the existing code first before bolting on even more and making it even less maintainable.
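For illustration (a hypothetical sketch, not kpmcore's actual parser), smartctl's JSON report carries different per-bus sections, so a parser has to dispatch on which one is present: "ata_smart_attributes" for ATA/SATA disks versus the NVMe health log section that smartctl emits for NVMe devices.

```python
# Hypothetical sketch, not kpmcore code. Key names are taken from
# smartctl's JSON output: "ata_smart_attributes" (ATA/SATA) and
# "nvme_smart_health_information_log" (NVMe).
def detect_smart_section(report):
    """Return which kind of SMART data a smartctl JSON report contains."""
    if "ata_smart_attributes" in report:
        return "ata"
    if "nvme_smart_health_information_log" in report:
        return "nvme"
    return "unknown"
```

A parser structured this way could keep the existing ATA path untouched while adding a separate NVMe branch.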
Comment 6 joseteluisete 2020-12-29 20:08:26 UTC
Same problem here.
When I try to open KDE Partition Manager, the application crashes after I type the root password.
Sometimes the output is ...

Device found: Samsung SSD 970 EVO Plus 500GB
Segmentation fault (core dumped)

Sometimes the output is ...

"Scanning devices..."
"Device found: loop0"
smartctl initialization failed for  "/dev/loop0" :  No such file or directory
error during smart output parsing for  "/dev/loop0" :  No such file or directory
unknown file system type  ""  on  "/dev/loop0"
Segmentation fault (core dumped)


If I run ...
smartctl --json --all /dev/nvme0n1

the result is ...
{
  "json_format_version": [
    1,
    0
  ],
  "smartctl": {
    "version": [
      7,
      1
    ],
    "svn_revision": "5022",
    "platform_info": "x86_64-linux-5.9.11-3-MANJARO",
    "build_info": "(local build)",
    "argv": [
      "smartctl",
      "--json",
      "--all",
      "/dev/nvme0n1"
    ],
    "messages": [
      {
        "string": "Smartctl open device: /dev/nvme0n1 failed: Permission denied",
        "severity": "error"
      }
    ],
    "exit_status": 2
  }
}

My system: Manjaro - Plasma 5.20.4 - Frameworks 5.76.0 - Qt 5.15.2 - Kernel 5.9.11-3

I hope this information is useful.
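The exit_status field in the JSON above is meaningful: smartctl's exit code is a bitmask (documented in smartctl(8)), and 2 means bit 1 is set, i.e. the device could not be opened — here simply the "Permission denied" from running without root. A small sketch of decoding the low bits:

```python
# Decode the low bits of smartctl's exit status bitmask (see smartctl(8),
# EXIT STATUS). Only the first few bits are listed here.
SMARTCTL_EXIT_BITS = {
    0: "command line did not parse",
    1: "device open failed",
    2: "SMART or ATA command failed",
    3: "SMART status returned 'DISK FAILING'",
}

def decode_exit_status(status):
    return [msg for bit, msg in SMARTCTL_EXIT_BITS.items() if status & (1 << bit)]
```

So this particular report is not a SMART failure at all; rerunning smartctl as root would likely succeed, as the "working log" attachment earlier shows.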
Comment 7 Andrius Štikonas 2020-12-29 20:12:15 UTC
(In reply to joseteluisete from comment #6)
> Same problem here.
> When I try to open KDE Partition Manager, after typing the root password the
> application crashes.
> Sometimes the answer is ...
> 
> Device found: Samsung SSD 970 EVO Plus 500GB”
> Segmentation fault (core dumped)
> 
> Sometimes the answer is ...
> 
> "Scanning devices..."
> "Device found: loop0"
> smartctl initialization failed for  "/dev/loop0" :  No such file or directory
> error during smart output parsing for  "/dev/loop0" :  No such file or
> directory
> unknown file system type  ""  on  "/dev/loop0"
> Segmentation fault (core dumped)
> 
> 
> If I run ...
> smartctl --json --all /dev/nvme0n1
> 
> the result is ...
> {
>   "json_format_version": [
>     1,
>     0
>   ],
>   "smartctl": {
>     "version": [
>       7,
>       1
>     ],
>     "svn_revision": "5022",
>     "platform_info": "x86_64-linux-5.9.11-3-MANJARO",
>     "build_info": "(local build)",
>     "argv": [
>       "smartctl",
>       "--json",
>       "--all",
>       "/dev/nvme0n1"
>     ],
>     "messages": [
>       {
>         "string": "Smartctl open device: /dev/nvme0n1 failed: Permission
> denied",
>         "severity": "error"
>       }
>     ],
>     "exit_status": 2
>   }
> }
> 
> My system: Manjaro - Plasma 5.20.4 - Frameworks 5.76.0 - Qt 5.15.2 - Kernel
> 5.9.11-3
> 
> I hope this information is useful.

If it's crashing, that might be a slightly different problem than the original report.

Can you try to obtain a gdb backtrace with debug symbols?
Comment 8 joseteluisete 2020-12-29 20:25:13 UTC
"Obtain gdb backtrace with debug symbols" ...
I'm sorry, I have no idea what that means.
I've searched for it and found nothing I can understand.
I am a newbie and my English level is far from perfect.
I'd appreciate some useful links where I can read what I must do to help.
Comment 9 Andrius Štikonas 2020-12-29 21:00:51 UTC
(In reply to joseteluisete from comment #8)
> "Obtain gdb backtrace with debug symbols" ...
> I'm sorry, I have no idea what that means.
> I've searched for it and found nothing I can understand.
> I am a newbie and my English level is far from perfect.
> I'd appreciate some useful links where I can read what I must do to help.

Which distro are you using? I can try to find some more specific instructions...
There is also quite a bit of info here: https://community.kde.org/Guidelines_and_HOWTOs/Debugging/How_to_create_useful_crash_reports

A backtrace is just a crash "log" showing which part of the code crashed; "Segmentation fault (core dumped)" tells us that a crash happened.
Comment 10 joseteluisete 2021-01-04 18:53:04 UTC
I installed gdb.
I ran ...

   gdb partitionmanager
   (gdb) run

It returned ...

   Starting program: /usr/bin/partitionmanager 
   [Thread debugging using libthread_db enabled]
   Using host libthread_db library "/usr/lib/libthread_db.so.1".
   [New Thread 0x7ffff2102640 (LWP 34995)]
   [New Thread 0x7fffebf3c640 (LWP 35002)]
   Loaded backend plugin:  "pmsfdiskbackendplugin"
   [New Thread 0x7fffe8e3d640 (LWP 35003)]
   [New Thread 0x7fffd7fff640 (LWP 35004)]
   [New Thread 0x7fffde5a0640 (LWP 35005)]
   [New Thread 0x7fffddd9f640 (LWP 35006)]
   [New Thread 0x7fffdd0d4640 (LWP 35007)]
   [New Thread 0x7fffdc8d3640 (LWP 35008)]
   [New Thread 0x7fffd77fe640 (LWP 35009)]
   [New Thread 0x7fffd6ffd640 (LWP 35010)]
   "Using backend plugin: pmsfdiskbackendplugin (1)"
   "Scanning devices..."
   [New Thread 0x7fffd4a56640 (LWP 35094)]
   "Device found: loop0"
   smartctl initialization failed for  "/dev/loop0" :  No such file or directory
   error during smart output parsing for  "/dev/loop0" :  No such file or directory
   unknown file system type  ""  on  "/dev/loop0"

   Thread 12 "m_DeviceScanner" received signal SIGSEGV, Segmentation fault.
   [Switching to Thread 0x7fffd4a56640 (LWP 35094)]
   0x00007ffff7efca53 in ?? () from /usr/lib/libkpmcore.so.10


I don't know if this is important, but I recently updated the system and the process showed this ...

   /usr/bin/grub-probe: warning: unknown device type nvme0n1

I hope this is useful.
Is there anything else I can do to give you more information?
Greetings!
Comment 11 Andrius Štikonas 2021-01-04 19:00:03 UTC
(In reply to joseteluisete from comment #10)
> I installed gdb.
> I run ...
> 
>    gdb partitionmanager
>    (gdb) run
> 
> It returned ...
> 
>    Starting program: /usr/bin/partitionmanager 
>    [Thread debugging using libthread_db enabled]
>    Using host libthread_db library "/usr/lib/libthread_db.so.1".
>    [New Thread 0x7ffff2102640 (LWP 34995)]
>    [New Thread 0x7fffebf3c640 (LWP 35002)]
>    Loaded backend plugin:  "pmsfdiskbackendplugin"
>    [New Thread 0x7fffe8e3d640 (LWP 35003)]
>    [New Thread 0x7fffd7fff640 (LWP 35004)]
>    [New Thread 0x7fffde5a0640 (LWP 35005)]
>    [New Thread 0x7fffddd9f640 (LWP 35006)]
>    [New Thread 0x7fffdd0d4640 (LWP 35007)]
>    [New Thread 0x7fffdc8d3640 (LWP 35008)]
>    [New Thread 0x7fffd77fe640 (LWP 35009)]
>    [New Thread 0x7fffd6ffd640 (LWP 35010)]
>    "Using backend plugin: pmsfdiskbackendplugin (1)"
>    "Scanning devices..."
>    [New Thread 0x7fffd4a56640 (LWP 35094)]
>    "Device found: loop0"
>    smartctl initialization failed for  "/dev/loop0" :  No such file or
> directory
>    error during smart output parsing for  "/dev/loop0" :  No such file or
> directory
>    unknown file system type  ""  on  "/dev/loop0"
> 
>    Thread 12 "m_DeviceScanner" received signal SIGSEGV, Segmentation fault.
>    [Switching to Thread 0x7fffd4a56640 (LWP 35094)]
>    0x00007ffff7efca53 in ?? () from /usr/lib/libkpmcore.so.10
> 
> 
> I don't know if this is important but I recently updated the system and the
> process showed this ...
> 
>    /usr/bin/grub-probe: warning: unknown device type nvme0n1
> 
> I hope this is useful.
> Is there anything else I can do to give you more information?
> Greetings!

Can you run

thread apply all bt

inside gdb after the crash happens?

We'll probably need to install some packages that contain debug symbols, but let's start with thread apply all bt.
Comment 12 joseteluisete 2021-01-04 19:28:05 UTC
thread apply all bt
returns ...

Thread 12 (Thread 0x7fffcca56640 (LWP 8759) "m_DeviceScanner"):
#0  0x00007ffff7efca53 in ?? () from /usr/lib/libkpmcore.so.10
#1  0x00007ffff7efd782 in readFstabEntries(QString const&) () from /usr/lib/libkpmcore.so.10
#2  0x00007ffff7efe1ec in possibleMountPoints(QString const&, QString const&) () from /usr/lib/libkpmcore.so.10
#3  0x00007ffff7ee04df in FileSystem::detectMountPoint(FileSystem*, QString const&) () from /usr/lib/libkpmcore.so.10
#4  0x00007ffff007290f in ?? () from /usr/lib/qt/plugins/libpmsfdiskbackendplugin.so
#5  0x00007ffff0072c09 in ?? () from /usr/lib/qt/plugins/libpmsfdiskbackendplugin.so
#6  0x00007ffff00750b5 in ?? () from /usr/lib/qt/plugins/libpmsfdiskbackendplugin.so
#7  0x00007ffff007361f in ?? () from /usr/lib/qt/plugins/libpmsfdiskbackendplugin.so
#8  0x00007ffff7efb163 in DeviceScanner::scan() () from /usr/lib/libkpmcore.so.10
#9  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#10 0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#11 0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 11 (Thread 0x7fffceffd640 (LWP 8683) "Thread (pooled)"):
#0  0x00007ffff5a209c8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007ffff66f6058 in QWaitCondition::wait(QMutex*, QDeadlineTimer) () from /usr/lib/libQt5Core.so.5
#2  0x00007ffff66f3504 in ?? () from /usr/lib/libQt5Core.so.5
#3  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#4  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#5  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 10 (Thread 0x7fffcf7fe640 (LWP 8682) "Thread (pooled)"):
#0  0x00007ffff5a209c8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007ffff66f6058 in QWaitCondition::wait(QMutex*, QDeadlineTimer) () from /usr/lib/libQt5Core.so.5
#2  0x00007ffff66f3504 in ?? () from /usr/lib/libQt5Core.so.5
#3  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#4  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#5  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 9 (Thread 0x7fffcffff640 (LWP 8681) "Thread (pooled)"):
#0  0x00007ffff5a209c8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007ffff66f6058 in QWaitCondition::wait(QMutex*, QDeadlineTimer) () from /usr/lib/libQt5Core.so.5
#2  0x00007ffff66f3504 in ?? () from /usr/lib/libQt5Core.so.5
#3  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#4  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#5  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 8 (Thread 0x7fffdc8d3640 (LWP 8680) "Thread (pooled)"):
#0  0x00007ffff5a209c8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007ffff66f6058 in QWaitCondition::wait(QMutex*, QDeadlineTimer) () from /usr/lib/libQt5Core.so.5
#2  0x00007ffff66f3504 in ?? () from /usr/lib/libQt5Core.so.5
#3  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#4  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#5  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 7 (Thread 0x7fffdd59e640 (LWP 8679) "partiti:disk$3"):
#0  0x00007ffff5a206a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007fffe974abdc in ?? () from /usr/lib/dri/iris_dri.so
#2  0x00007fffe97493a8 in ?? () from /usr/lib/dri/iris_dri.so
#3  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#4  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 6 (Thread 0x7fffddd9f640 (LWP 8678) "partiti:disk$2"):
#0  0x00007ffff5a206a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007fffe974abdc in ?? () from /usr/lib/dri/iris_dri.so
#2  0x00007fffe97493a8 in ?? () from /usr/lib/dri/iris_dri.so
#3  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#4  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 5 (Thread 0x7fffde5a0640 (LWP 8677) "partiti:disk$1"):
#0  0x00007ffff5a206a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007fffe974abdc in ?? () from /usr/lib/dri/iris_dri.so
#2  0x00007fffe97493a8 in ?? () from /usr/lib/dri/iris_dri.so
#3  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#4  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 4 (Thread 0x7fffe8e3d640 (LWP 8676) "partiti:disk$0"):
#0  0x00007ffff5a206a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1  0x00007fffe974abdc in ?? () from /usr/lib/dri/iris_dri.so
#2  0x00007fffe97493a8 in ?? () from /usr/lib/dri/iris_dri.so
#3  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#4  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 3 (Thread 0x7fffebf3c640 (LWP 8675) "QDBusConnection"):
#0  0x00007ffff637146f in poll () from /usr/lib/libc.so.6
#1  0x00007ffff50ac93f in ?? () from /usr/lib/libglib-2.0.so.0
#2  0x00007ffff50572b1 in g_main_context_iteration () from /usr/lib/libglib-2.0.so.0
#3  0x00007ffff69306e1 in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQt5Core.so.5
#4  0x00007ffff68d63fc in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQt5Core.so.5
#5  0x00007ffff66eed22 in QThread::exec() () from /usr/lib/libQt5Core.so.5
#6  0x00007ffff61ba098 in ?? () from /usr/lib/libQt5DBus.so.5
#7  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#8  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#9  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 2 (Thread 0x7ffff2102640 (LWP 8674) "QXcbEventQueue"):
#0  0x00007ffff637146f in poll () from /usr/lib/libc.so.6
#1  0x00007ffff49fb63b in ?? () from /usr/lib/libxcb.so.1
#2  0x00007ffff49fd37b in xcb_wait_for_event () from /usr/lib/libxcb.so.1
#3  0x00007ffff220f131 in ?? () from /usr/lib/libQt5XcbQpa.so.5
#4  0x00007ffff66eff0f in ?? () from /usr/lib/libQt5Core.so.5
#5  0x00007ffff5a1a3e9 in start_thread () from /usr/lib/libpthread.so.0
#6  0x00007ffff637c293 in clone () from /usr/lib/libc.so.6

Thread 1 (Thread 0x7ffff2884cc0 (LWP 8670) "partitionmanage"):
#0  0x00007ffff637146f in poll () from /usr/lib/libc.so.6
#1  0x00007ffff50ac93f in ?? () from /usr/lib/libglib-2.0.so.0
#2  0x00007ffff50572b1 in g_main_context_iteration () from /usr/lib/libglib-2.0.so.0
#3  0x00007ffff69306e1 in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQt5Core.so.5
#4  0x00007ffff68d63fc in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQt5Core.so.5
#5  0x00007ffff68de894 in QCoreApplication::exec() () from /usr/lib/libQt5Core.so.5
#6  0x000055555557e858 in main ()
Comment 13 Andrius Štikonas 2021-01-04 19:35:50 UTC
Ah... That suggests that in your case something goes wrong with fstab.

See the following line in the backtrace:

#1  0x00007ffff7efd782 in readFstabEntries(QString const&) () from /usr/lib/libkpmcore.so.10

I don't know where exactly it crashes in that function (you would need to install the kpmcore package with debug symbols from your package manager).

But that looks to be a different bug than the original report.


Anything non-standard in your fstab? This could be a duplicate of https://bugs.kde.org/show_bug.cgi?id=429191 and https://bugs.kde.org/show_bug.cgi?id=430475
Comment 14 joseteluisete 2021-01-04 19:50:35 UTC
(In reply to Andrius Štikonas from comment #13)
> Aa... That suggests that in your case something goes wrong with fstab.
> 
> See the following line in backtrace
> 
> #1  0x00007ffff7efd782 in readFstabEntries(QString const&) () from
> /usr/lib/libkpmcore.so.10
> 
> I don't know where exactly it crashes in that function (you would need to
> install kpmcore package with debug symbols from your package manager).
> 
> But that looks to be different bug than original bug report.
> 
> 
> Anything non-standard in your fstab? Could be duplicate of
> https://bugs.kde.org/show_bug.cgi?id=429191 and
> https://bugs.kde.org/show_bug.cgi?id=430475

Yes, I have something unusual in my fstab: mhddfs.
I use it to create a single location with all the videos I have on 3 huge HDDs.
I've used it for at least 3 years and have never had any problem.
KDE Partition Manager never crashed before.

UUID=BE78-6D97                            /boot/efi      vfat    umask=0077 0 2
UUID=5e31802a-970e-4d5e-83d7-114511d2b96c swap           swap    defaults,noatime,discard 0 0
UUID=683125e2-1176-4786-a706-ca0a8836841d /              ext4    defaults,noatime 0 1
UUID=cd65db9f-63d6-4160-9994-e812f40077dc /adds          ext4    defaults,noatime 0 2
UUID=f2ef4ef4-1603-4e2c-9833-e6a74c7c4dcd /apps          ext4    defaults,noatime 0 2
UUID=bbb9996a-6e05-430e-9221-48881bd95ddf /htpc          ext4    defaults,noatime,discard 0 2
UUID=04170d4f-491e-42be-aec1-f86fe19b7b09 /hdd1          ext4    defaults,noatime 0 2
UUID=d61c8d60-2d43-4034-b61c-ed695c1361db /hdd2          ext4    defaults,noatime 0 2
UUID=da887262-1011-44ce-8a80-d096e0c4281f /hdd3          ext4    defaults,noatime 0 2
mhddfs#/hdd1,/hdd2,/hdd3                  /htpc/video    fuse    defaults,allow_other 0 0

I'm going to delete it and try again.
Comment 15 joseteluisete 2021-01-04 19:54:48 UTC
That's it!
I've deleted the line with mhddfs from fstab and now KDE Partition Manager works again.
Comment 16 joseteluisete 2021-01-04 20:12:55 UTC
Do I report it as a new bug?
I've read about the SSHFS bug and I don't know if it's the same or not.
Comment 17 Andrius Štikonas 2021-01-04 23:34:23 UTC
(In reply to joseteluisete from comment #16)
> Do I report it as a new bug?
> I've read about the SSHFS bug and I don't know if this it's the same or not.

Don't open a new bug, but maybe you can paste the offending fstab line in the already existing report.
Comment 18 Andrius Štikonas 2021-01-04 23:35:48 UTC
(In reply to joseteluisete from comment #16)
> Do I report it as a new bug?
> I've read about the SSHFS bug and I don't know if this it's the same or not.

In any case, I can see that it's the same bug.

The fstab parsing code that I wrote assumes that anything after # is a comment.
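The failure mode can be sketched like this (hypothetical Python, not the actual kpmcore C++): stripping everything after # truncates the mhddfs line to a single field, and code that then assumes the usual six fstab fields reads past the end; treating only a # at the start of a line as a comment keeps the line intact.

```python
# Hypothetical reconstruction of the bug, not kpmcore's actual code.
MHDDFS_LINE = "mhddfs#/hdd1,/hdd2,/hdd3  /htpc/video  fuse  defaults,allow_other 0 0"

def parse_fstab_line_buggy(line):
    # Bug: '#' can legitimately appear inside the fs_spec field (mhddfs).
    return line.split("#", 1)[0].strip().split()

def parse_fstab_line_fixed(line):
    # Only whole-line comments start with '#', per fstab(5).
    stripped = line.strip()
    if not stripped or stripped.startswith("#"):
        return None
    return stripped.split()
```

The buggy version yields a one-field entry for the mhddfs line, so any later access to the mount point or options field is out of bounds, which matches the segfault in readFstabEntries above.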
Comment 19 Nate Graham 2021-03-09 15:06:18 UTC
FWIW, KInfoCenter now has a SMART status page which does work for my NVMe disk. Perhaps the two could share code so that both representations have the same information?
Comment 20 Harald Sitter 2021-03-09 15:23:56 UTC
(In reply to Nate Graham from comment #19)
> FWIW, KInfoCenter now has a SMART status page which does work for my NVMe
> disk. Perhaps the two could share code so that both representations have the
> same information?

https://bugs.kde.org/show_bug.cgi?id=428564
Comment 21 Andrius Štikonas 2021-03-09 15:43:04 UTC
(In reply to Harald Sitter from comment #20)
> (In reply to Nate Graham from comment #19)
> > FWIW, KInfoCenter now has a SMART status page which does work for my NVMe
> > disk. Perhaps the two could share code so that both representations have the
> > same information?
> 
> https://bugs.kde.org/show_bug.cgi?id=428564

That will help with code sharing, although it doesn't solve the problem of NVMe SMART not working in kpmcore, since at the moment it only works in kinfocenter.
Comment 22 Harald Sitter 2021-03-09 16:33:51 UTC
Sure. But plasma-disks doesn't implement the view that is broken in Partition Manager; that is exactly what the other bug laments ;)

It only looks at the generic "SMART Status: Good" field, which also works in PM; it doesn't model the health data or SMART table in detail.

i.e. there is nothing plasma-disks has to offer here. Well, that's not quite true... if you want test data, here are a bunch of samples: https://invent.kde.org/plasma/plasma-disks/-/tree/master/autotests/fixtures
Comment 24 Andrius Štikonas 2022-06-08 15:21:51 UTC
Nate, partitionmanager does not use udisks for getting SMART status (it uses smartctl), so that udisks PR wouldn't help.
Comment 25 Nate Graham 2022-06-08 15:22:50 UTC
Ok, sorry.
Comment 26 Yaroslav Sidlovsky 2023-07-02 14:23:39 UTC
FYI: kpmcore relies on the "ata_smart_attributes" JSON object from smartctl output.
It's absent for NVMe drives.
Comment 27 Yaroslav Sidlovsky 2023-07-02 14:24:30 UTC
Created attachment 160047 [details]
smartctl JSON for NVMe drive
Comment 28 Yaroslav Sidlovsky 2023-07-02 14:25:48 UTC
Forgot to mention the corresponding line in KPMCore:
https://invent.kde.org/system/kpmcore/-/blob/3d2c3960117d049ec73a6e6b52600a2a883e16c9/src/core/smartparser.cpp#L144
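For completeness, here is a sketch of what an NVMe-aware branch would read instead of the ATA attribute table. The field names ("critical_warning", "percentage_used", "power_on_hours") are what smartctl emits inside "nvme_smart_health_information_log"; treat them as an assumption checked against the attached JSON, not as kpmcore API.

```python
# Hypothetical NVMe branch; field names follow smartctl's JSON output.
def nvme_health_summary(report):
    log = report.get("nvme_smart_health_information_log")
    if log is None:
        return None  # not an NVMe report: fall back to ata_smart_attributes
    return {
        "healthy": log.get("critical_warning", 0) == 0,
        "wear_percent": log.get("percentage_used"),
        "power_on_hours": log.get("power_on_hours"),
    }
```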
Comment 29 hurricanepootis 2023-10-28 21:01:50 UTC
Any progress on this?
Comment 30 Andrius Štikonas 2023-10-28 21:36:59 UTC
(In reply to hurricanepootis from comment #29)
> Any progress on this?

No, no progress yet. Perhaps once I replace my laptop and have an NVMe drive in it... But all this SMART code is somewhat ugly compared to other parts of KDE Partition Manager, and personally I'm less familiar with it...