The Valgrind documentation notes that it doesn't support 80-bit arithmetic internally. It also notes that programs depending on 80-bit arithmetic aren't portable anyway, and says, "The impression observed from many FP regression tests is that the accuracy differences aren't significant."

Well, I'm here to report that this limitation caused a catastrophic failure in a piece of software of mine that is extremely portable. My software strictly follows the ANSI C 1990 standard and was compiled with gcc on Linux. The problem is the disconnect between the Linux/gcc system headers and what valgrind actually implements.

In my program, I use LDBL_MAX and count on it not being infinite. This is a portable assumption, since any conforming implementation of ANSI C must make LDBL_MAX a finite value. My program works fine on implementations where double and long double are both 64 bits; all it counts on is that LDBL_MAX is the largest representable finite long double value.

The trouble is that when I compile under gcc, the headers define LDBL_MAX as an 80-bit value. When I then run the program under valgrind, that 80-bit value gets converted to a 64-bit value, and since the exponent is too large, it becomes the representation for positive infinity. My program has a loop that divides the value down to find an exponent, but with infinity in place of a finite value, dividing never makes progress, and the loop keeps allocating more memory on each pass. Hence my very portable program, which should work on any implementation of ANSI C 1990, not only goes into an infinite loop under valgrind but keeps allocating more and more memory until either it or the whole system crashes.

I know the lack of proper long double support is a known issue. The point of filing this bug report is to provide a data point against the impression given in the documentation that the lack of proper 80-bit floating-point support is unlikely to be a serious drawback for portable programs.
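For illustration, here is a minimal, self-contained sketch of the failure mode. This is not my actual program (the real code also allocates memory on each pass, which is what turns the endless loop into a crash), but it shows the same divide-down loop:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        long double x = LDBL_MAX; /* must be finite in any conforming implementation */
        int exponent = 0;

        /* Divide down to find a rough binary exponent.  Natively this
           terminates after roughly 16384 iterations; if LDBL_MAX has been
           silently converted to +infinity, x / 2.0L stays infinite and the
           loop never makes progress. */
        while (x >= 1.0L) {
            x /= 2.0L;
            exponent++;
        }
        printf("binary exponent of LDBL_MAX: %d\n", exponent);
        return 0;
    }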
I reported a similar issue in the Debian BTS in 2018: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=890215 That report concerns a test of x86 extended precision with a subnormal value, where 0 is obtained under valgrind; a conversion to double precision would explain the issue. The bug was found via the tset_ld.c test program in GNU MPFR, where one of the tests on long double is specific to x86 extended precision and is enabled only in that case (the detection is done by a configure script). The program is thus portable and works on many different platforms, but fails with Valgrind, and only in this case.
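As a rough illustration of what that test exercises (a sketch under my own assumptions, not the actual MPFR test; it relies on gcc's hexadecimal floating constants):

    #include <stdio.h>

    int main(void)
    {
        /* 2^-16445 is the smallest positive subnormal in x86 extended
           precision, far below the smallest positive double (2^-1074),
           so a conversion to double precision flushes it to 0. */
        volatile long double tiny = 0x1p-16445L;
        printf("%Lg\n", tiny); /* nonzero natively; 0 under valgrind */
        return 0;
    }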
Someone changed the title of this bug report I submitted so that it says "x86 32bit". Actually, I'm using 64-bit x86, not 32-bit, and that is where I encountered the problem.
(In reply to Chris Wilson from comment #2)
> Someone changed the title of this bug report I submitted so that it says
> "x86 32bit". Actually, I'm using 64-bit x86, not 32-bit, and that is
> where I encountered the problem.

That someone was me. I just wanted the bug properly titled. I don't think anybody is working on this (patches welcome, of course), but let's keep this bug open to signal that there are some people interested in it.
*** Bug 471634 has been marked as a duplicate of this bug. ***
*** Bug 424044 has been marked as a duplicate of this bug. ***
This is a duplicate as well.

*** This bug has been marked as a duplicate of bug 197915 ***