| Summary: | Helgrind reports possible data race which isn't | | |
|---|---|---|---|
| Product: | [Developer tools] valgrind | Reporter: | h.teisseire |
| Component: | helgrind | Assignee: | Julian Seward <jseward> |
| Status: | RESOLVED INTENTIONAL | | |
| Severity: | normal | CC: | pjfloyd |
| Priority: | NOR | | |
| Version First Reported In: | 3.18.1 | | |
| Target Milestone: | --- | | |
| Platform: | Ubuntu | | |
| OS: | Linux | | |
| Latest Commit: | | Version Fixed/Implemented In: | |
| Sentry Crash Report: | | | |
Description
h.teisseire
2023-05-19 02:07:05 UTC
I know it's always a hassle to read source code, although I don't think I've made it very complicated; it should be quite straightforward.

I also want to note that I'm learning programming, so I know some of my code feels like it should have been done differently. For example, I'm using pthread_detach when pthread_join would have been more appropriate for what I'm trying to do. If I were to redo the project I would do it differently; however, I'm really only here for the "possible data race" issue, why it happens in this specific case, and whether it's a bug or not.

Paul Floyd

(In reply to h.teisseire from comment #2)
> I also want to note that I'm learning programming, so I know some of my
> code feels like it should have been done differently. For example, I'm
> using pthread_detach when pthread_join would have been more appropriate
> for what I'm trying to do.

See point 6 here:

https://valgrind.org/docs/manual/hg-manual.html#hg-manual.effective-use

Round up all finished threads using pthread_join. Avoid detaching threads: don't create threads in the detached state, and don't call pthread_detach on existing threads.

Using pthread_join to round up finished threads provides a clear synchronisation point that both Helgrind and programmers can see. If you don't call pthread_join on a thread, Helgrind has no way to know when it finishes, relative to any significant synchronisation points for other threads in the program. So it assumes that the thread lingers indefinitely and can potentially interfere indefinitely with the memory state of the program. It has every right to assume that -- after all, it might really be the case that, for scheduling reasons, the exiting thread did run very slowly in the last stages of its life.
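To make point 6 concrete, here is a minimal, hypothetical sketch (not taken from the reported program) of the two patterns: a detached worker whose completion Helgrind cannot see, and the same worker rounded up with pthread_join, which gives Helgrind the synchronisation point it needs.

```c
/*
 * Minimal sketch: why Helgrind flags a detached thread.
 *
 * Build:  gcc -g -pthread race_sketch.c -o race_sketch
 * Check:  valgrind --tool=helgrind ./race_sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int shared = 0;

static void *worker(void *arg)
{
    (void)arg;
    shared = 42;          /* write from the worker thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

#ifdef USE_DETACH
    /* Detached version: even if the sleep is long enough for the worker
     * to finish in practice, Helgrind cannot prove that it has, so the
     * read of `shared` below is reported as a possible data race. */
    pthread_detach(tid);
    sleep(1);
#else
    /* Joined version: pthread_join is a synchronisation point Helgrind
     * understands, so the read below is ordered after the worker's write
     * and no race is reported. */
    pthread_join(tid, NULL);
#endif

    printf("shared = %d\n", shared);   /* read from the main thread */
    return 0;
}
```

Built with -DUSE_DETACH, `valgrind --tool=helgrind` would typically report a possible data race on the read of `shared` in main; in the pthread_join variant, the join provides the happens-before ordering and the report goes away.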
h.teisseire

(In reply to Paul Floyd from comment #3)

Alright, I understand. Thank you for your quick answer :)