Bug 110240 - memcheck vs floating point numerical analysis
Status: REPORTED
Alias: None
Product: valgrind
Classification: Developer tools
Component: memcheck (other bugs)
Version First Reported In: 3.0.0
Platform: Compiled Sources
OS: Linux
Importance: NOR wishlist
Target Milestone: ---
Assignee: Julian Seward
Reported: 2005-08-05 15:33 UTC by John Reiser
Modified: 2020-04-12 17:55 UTC
CC List: 1 user




Description John Reiser 2005-08-05 15:33:40 UTC
Memcheck currently makes two kinds of approximations when checking floating
point code: both the propagation of uninit bits and the main opcode results
assume 64-bit round-to-nearest mode.  See
http://www.valgrind.org/docs/manual/manual-core.html#manual-core.limits .
[Thank you for the documentation.]  It would be better for memcheck to make
only one of these approximations: propagate the uninit bits through floating
point operations using a replacement strategy [if memcheck must], but give the
main opcode results exactly as specified by the user code.
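
To make the first approximation concrete, here is a minimal sketch of user
code whose answer depends on the rounding mode (compile with -frounding-math
so the compiler does not constant-fold across the mode changes).  Run
natively, the two quotients differ in the last bit; under a tool that forces
round-to-nearest everywhere, they compare equal.

  #include <fenv.h>
  #include <stdio.h>

  #pragma STDC FENV_ACCESS ON

  int main(void)
  {
      volatile double x = 1.0, y = 3.0;

      fesetround(FE_TONEAREST);
      double near = x / y;          /* 1/3 rounded to nearest */

      fesetround(FE_UPWARD);
      double up = x / y;            /* 1/3 rounded toward +infinity */

      /* Natively: near != up.  Under a tool that silently assumes
         round-to-nearest for every operation: near == up. */
      printf("near  = %.17g\nup    = %.17g\nequal = %d\n",
             near, up, near == up);
      return 0;
  }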

If the user code has gone to the trouble of changing the precision, rounding
mode, or exception control flags, then it probably did so for a very good
reason.  Numerical analysis has decades of published, peer-reviewed experience
in dealing with floating point arithmetic.  (IEEE 754 itself is a leading
example.)  When memcheck overrides the program's explicit use of IEEE 754
features, the result is low confidence in memcheck, because the main answers
come out wrong under memcheck.
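
As a concrete instance of "changing the precision": on x86, glibc's
<fpu_control.h> lets a program narrow the x87 significand to 24 bits.  A
minimal sketch (x86-specific; on x86-64 the divide must actually run on the
x87, e.g. build with -mfpmath=387, for the control word to matter):

  #include <fpu_control.h>   /* glibc, x86 only */
  #include <stdio.h>

  int main(void)
  {
      fpu_control_t cw;
      _FPU_GETCW(cw);
      cw = (cw & ~_FPU_EXTENDED) | _FPU_SINGLE;   /* 24-bit significand */
      _FPU_SETCW(cw);

      volatile double a = 1.0, b = 3.0;
      double q = a / b;      /* now rounded as if to float, ~0.33333334 */
      printf("q = %.17g\n", q);
      return 0;
  }

A tool that evaluates every operation in 64-bit round-to-nearest instead
prints 0.33333333333333331 here, which is exactly the "wrong main answer"
complained about above.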

In theory the uninit bits and the main results could become inconsistent with
each other, but that matters much less than not getting the right main answer.
As an easy special case, if memcheck performs a default fixup of a main
result, then propagation of uninit bits stops anyway: there are no uninit bits
in NaN, infinity, or zero.  Moreover, uninit bits should occur less often with
non-default precision/rounding/flags than in general, because such cases have
already received extra programmer attention, as evidenced by the choice of
non-defaults.
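
A hypothetical sketch (this is not memcheck/VEX code) of the split being
requested: the validity computation may approximate, but the main operation
runs under whatever FP control state the program has established.

  #include <stdint.h>

  /* Shadow pair: the real value plus a mask of undefined bits. */
  typedef struct { double value; uint64_t undef; } shadow_double;

  static shadow_double fp_add(shadow_double a, shadow_double b)
  {
      shadow_double r;
      /* Replacement strategy for uninit propagation: if any input bit
         is undefined, pessimistically mark the whole result undefined.
         This half may approximate. */
      r.undef = (a.undef | b.undef) ? ~(uint64_t)0 : 0;
      /* Main result: the actual hardware add, honoring the precision,
         rounding mode, and exception flags currently in effect.
         This half must not approximate. */
      r.value = a.value + b.value;
      return r;
  }

Any inconsistency between the two halves is confined to the shadow bits,
which is exactly the trade-off argued for above.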

If numerical analysis code already tends to be so clean, then why would the
user run memcheck anyway?  In order to take advantage of checking for uninit
and malloc/free errors in input/output, visualization, graphical user
interfaces, threading, and general system calls.  But please give the right
main answers.