Summary: | Filelight calculates sizes incorrectly because of hard links | |
---|---|---|---
Product: | [Applications] filelight | Reporter: | branch8475 <branch8475>
Component: | general | Assignee: | Unassigned bugs <unassigned-bugs-null>
Status: | CONFIRMED | |
Severity: | normal | CC: | kyle.devir, martin.sandsmark, sitter
Priority: | NOR | |
Version First Reported In: | 24.12.2 | |
Target Milestone: | --- | |
Platform: | Other | |
OS: | Linux | |
Latest Commit: | | Version Fixed In: |
Sentry Crash Report: | | |
Description
branch8475
2025-03-05 18:21:10 UTC
The code that is meant to deduplicate hard links is not correctly implemented. It currently lives in the PosixWalker as a std::set, and nodes already in the set are skipped. However, the walkers run in any number of threads, so no single walker's set is ever complete.

This needs some thought about where best to track the dev-inode pairs. A global cache shared by all walkers would work, but it needs mutexing, so passing the dev-inode pair out of the walker might be more efficient (we have a global accumulation lock anyway); that, however, breaks the abstraction somewhat. A sketch of the shared-cache option is shown below.

*** Bug 507502 has been marked as a duplicate of this bug. ***
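A minimal sketch of the shared-cache option, not Filelight's actual code: the class name `HardLinkRegistry` and method `shouldCount()` are hypothetical, and it assumes each walker already has the file's `struct stat`. Only files with `st_nlink > 1` are recorded; the `(st_dev, st_ino)` pair is kept in one set guarded by a `std::mutex` so every walker thread sees the same state:

```cpp
// Hypothetical sketch of a process-wide registry of already-seen hard links.
// Names are illustrative only, not Filelight's real API.
#include <sys/stat.h>
#include <mutex>
#include <set>
#include <utility>

class HardLinkRegistry
{
public:
    // Returns true if this file should be counted, false if it is a
    // hard link to an inode some walker thread has already counted.
    bool shouldCount(const struct stat &st)
    {
        if (st.st_nlink <= 1) {
            return true; // not hard-linked, no bookkeeping needed
        }
        const std::pair<dev_t, ino_t> key{st.st_dev, st.st_ino};
        std::lock_guard<std::mutex> lock(m_mutex);
        // insert().second is true only if the key was not seen before
        return m_seen.insert(key).second;
    }

private:
    std::mutex m_mutex;
    std::set<std::pair<dev_t, ino_t>> m_seen;
};

// Each PosixWalker would consult one shared instance instead of its
// own per-walker std::set:
//   if (!registry.shouldCount(statBuf))
//       continue; // duplicate hard link, skip it
```

The alternative mentioned above, returning the dev-inode pair from the walker and deduplicating under the existing global accumulation lock, would avoid the extra mutex but leaks POSIX details out of the walker abstraction.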