Version: 1.6 (using KDE 3.3.2 (3.1))
Compiler: gcc version 3.3.5 (Debian 1:3.3.5-6)
OS: Linux (i686) release 2.6.10

As I said, KCalc does not give the correct answer; it gives answers like 3.252606517456513e-19 instead of 0. At first I thought it was a processor error, but bc on the command line does not make this mistake, so it may be an optimized-code error or something similar. The bug occurs with the floating-point expression a - (b - c), where b - c equals a, c is not 0, and c > 0.1788. For example, 1.1 - (1.2788 - 0.1788) triggers the bug, but 1.1 - (1.2787 - 0.1787) does not.
Appears to be a problem with CVS HEAD too.
bc uses infinite-precision math. I think KCalc should too. The patch is an ingenious solution, but nasty at the same time ;) There are various bigint libraries; SadEagle suggests GMP and GSL as the two major ones. We would have to import the library if we go this way. This is really a duplicate of bug 34765, but more severe: not a wish but a bug.
It's too late to move KCalc to GMP or something similar before the release of KDE 3.4. But yes, I have been convinced that this would be the right thing to do. Could you check whether my bug-fix is a suitable work-around? It does lower the precision, but with plus, minus, multiplication and division it seems to work OK. I'm not sure, though, what happens with longer computations. Thanks, Klaus
Marking as dup. *** This bug has been marked as a duplicate of 34765 ***
Reopening.
*** Bug 104700 has been marked as a duplicate of this bug. ***
*** Bug 103256 has been marked as a duplicate of this bug. ***
*** Bug 109029 has been marked as a duplicate of this bug. ***
In response to Thiago Macieira's comment in bug 103256: it seems as if he doesn't understand how floating point works either. For simple integer operations one should always get an exact result. In my example, (16*16)+4 gives a wrong answer in KCalc, but my C code gets the correct result, and inspecting the binary representation shows that the result is exact, i.e. 260.00000... If KCalc just used plain old double-precision (or even single-precision) math I would get the correct result.

Double-precision IEEE floating point represents numbers as a sign bit, 11 bits of exponent, then the fractional part. Given how floating-point numbers are handled, pure integer operations should produce no rounding error as long as the values do not exceed the fractional part, which for double precision is 52 bits.

    #include <stdio.h>
    #include <math.h>

    int main(int argc, char *argv[])
    {
        double a = 16.0;
        double b = 16.0;
        double c = 4.0;
        union {
            double d;
            struct {
                long s:1;
                long exp:11;
                long frac1:20;
                long frac2:32;
            } d2;
            unsigned long i[2];
        } u;
        int j;
        int exp;

        u.d = (a * b) + c;
        // u.d = 2.0 + 2.0;
        exp = u.d2.exp + 1025;
        printf("%.16g\n", u.d);
        printf("Hex: 0x%08x 0x%08x\n", u.i[0], u.i[1]);
        printf("s=%ld, Exp = 0x%03x, frac1=0x%05x, frac2=0x%08x\n",
               u.d2.s, u.d2.exp + 1025, u.d2.frac1, u.d2.frac2);
        printf("exp=%d\n", exp);
        j = (1 << exp) | (u.d2.frac1 >> (20 - exp));
        if (u.d2.s)
            j = -j;
        printf("Integer result=%d\n", j);
        return 0;
    }

    ./mtest
    260
    Hex: 0x40704000 0x00000000
    s=0, Exp = 0x008, frac1=0x04000, frac2=0x00000000
    exp=8
    Integer result=260

I.e. 2 + 2 should always give 4, yet KCalc gives 4.000000000000000888178419700125232338905.
No, I do understand. I just don't think we need yet another bug report open for what is the same symptom. It doesn't matter that one calculation is integer-only and the other contains fractional components. What does matter is that the behaviour shown to the user is consistent, to the limits of what is possible with floating point. Beyond that, we'll need an arbitrary-precision library.
One fix that seems to work, as far as I can tell, is to first check whether both of the operands are integers, and if so skip the rounding code entirely. This can be done by checking whether (FMOD(op, 1.0) == 0.0); the result is true for integer values because of the way integers are encoded in IEEE floating point. If, however, some operation introduces a fractional part, then this check fails and the code falls back to the old method. With my change, 2+2 = 4 and not 4.000000000000000888178419700125232338905, yet the above example of 1.9-(9.9-8) still returns the expected answer. I added the following patch (note that the old and new files appear swapped in the diff, so the '-' lines are the ones I added):

    --- kcalc_core.cpp	2005-07-20 19:47:37.310002000 -0700
    +++ kcalc_core.cpp~	2005-05-23 05:09:26.000000000 -0700
    @@ -210,8 +210,6 @@
     static CALCAMNT ExecAdd(CALCAMNT left_op, CALCAMNT right_op)
     {
    -	if ((FMOD(left_op, 1.0) == 0.0) && (FMOD(right_op, 1.0) == 0.0))
    -		return left_op + right_op;
     	// printf("ExecAdd\n");

     	CALCAMNT tmp_result = left_op + right_op;
    @@ -233,8 +231,6 @@
     static CALCAMNT ExecSubtract(CALCAMNT left_op, CALCAMNT right_op)
     {
    -	if ((FMOD(left_op, 1.0) == 0.0) && (FMOD(right_op, 1.0) == 0.0))
    -		return left_op - right_op;
     	// printf("ExecSubtract\n");

     	CALCAMNT tmp_result = left_op - right_op;
     	// When operating with floating point numbers the following

Note that this has not been well tested, but it should solve the problem of pure-integer add, subtract, multiply, and (in my limited testing) divide giving non-integer results when an integer result is expected. Obviously once a decimal point is introduced this may break down: e.g. 0.9+0.1+3 gives a non-integer answer because the rounding code is still in effect (note that xcalc gives the correct result here, though xcalc gives the wrong result for 1.9-(9.9-8)). I am not an expert on this by any means. I can understand why, with IEEE floating point, the original problem gives a non-zero result; I am more concerned with pure integer operations in my case.
I recently added arbitrary precision to KCalc for KDE 3.5 (internally KCalc now uses fractions for basic arithmetic operations), so this bug should be fixed. Klaus