Fanning Software Consulting

Representing Floats in IDL

QUESTION: OK, I've read the "sky is falling" article (several times), and I think I understand why some numbers cannot be represented exactly on a computer. I am even clear that the number 3.3 is one of them. But my question is this: if you assign x=3.3, and you know a priori that the floating-point data type does not have enough bits to store this number precisely, why does the Print command show the number as 3.3?

For example, assuming we are talking about IEEE single-precision floats, we have one bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa. So what is 3.3?

ANSWER: I took a quick stab at this answer by replying flippantly:

I presume it is because whatever number is stored, when rounded to the 7-8 significant figures a floating point value can accurately represent, comes out to 3.300000.
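A quick way to see both sides of this in IDL is to print the value with its default format and then with a much wider one (the F25.22 width is simply my choice, wide enough to show every stored digit); the output should look something like this:

    IDL> print, 3.3
          3.30000
    IDL> print, 3.3, format='(F25.22)'
      3.2999999523162841796875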

But the proper answer was provided by Sven Utcke.

Consider how decimal values (on the left, below) are converted to binary values (on the right, below).

    3 = 11 = 1.1 * 2^1
   0.3 = 0.0100110011001100110011001100110011001100... 

The repeating fraction comes from doubling the fractional part over and over; the integer part of each product (set off with underscores below) supplies the next binary digit:

    0.3*2 = _0_.6    0.6*2 = _1_.2
    0.2*2 = _0_.4    0.4*2 = _0_.8    0.8*2 = _1_.6    0.6*2 = ... 
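This repeated doubling is easy to mechanize. Here is a small IDL sketch (meant for a script or main-level program; starting from the double 0.3d is itself an approximation, but its first couple of dozen bits agree with the true expansion):

    ; Convert the fractional part 0.3 to binary by repeated doubling:
    ; at each step the integer part of the product is the next bit.
    frac = 0.3d
    bits = ''
    for i = 1, 24 do begin
       frac = frac * 2d               ; double the remaining fraction
       bit  = floor(frac)             ; the integer part is the next binary digit
       bits = bits + strtrim(bit, 2)  ; collect the digits in a string
       frac = frac - bit              ; keep only the fractional part
    endfor
    print, bits                       ; 010011001100110011001100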

Combining these parts, we get this:

   3.3 = 1.10100110011001100110011 * 2^1 

Or:

   S | Exp + 127  | Mantissa without leading 1
   0 | 10000000   | 1010011 00110011 00110011 

Which, if we recombine it, turns out to be 3.2999999523162841796875. We can actually see this in IDL.

    IDL> print, byte(3.3,0,4)
     51  51  83  64 

Which, if we rewrite it appropriately (reversing the little-endian byte order and writing each byte in binary), turns out to be this.

   01000000 01010011 00110011 00110011 

Which, when split back into its sign, exponent, and mantissa fields, is exactly the number above.
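As a cross-check, the three fields can be reassembled by hand in IDL (a sketch only; the bit string is copied from the table above, and the F25.22 format is my choice):

    ; value = (-1)^sign * 1.mantissa * 2^(exponent - 127)
    sign     = 0
    exponent = 128                            ; the stored bits 10000000
    mantissa = '10100110011001100110011'      ; the 23 stored fraction bits
    value = 1.0d                              ; the implicit leading 1
    for i = 0, 22 do $
       value = value + long(strmid(mantissa, i, 1)) * 2.0d^(-(i+1))
    value = (-1)^sign * value * 2.0d^(exponent - 127)
    print, value, format='(F25.22)'           ; 3.2999999523162841796875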

Norbert Hahn notes that:

... because we only store 24 binary mantissa digits. The formatting routine in IDL most likely works in double precision and hence appends binary zeroes rather than continuing to repeat the pattern 0011. This results in a decimal number that is slightly less than 3.30000000.

The problem arises from the conversion between binary and decimal numbers. It is not a problem with IDL, but a problem of working with numbers of finite length.
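A simple way to watch this happen is to print the single- and double-precision versions of 3.3 with the same wide format (the F20.16 width is my choice); the output should be close to this:

    IDL> print, 3.3,  format='(F20.16)'
      3.2999999523162842
    IDL> print, 3.3d, format='(F20.16)'
      3.2999999999999998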

Version of IDL used to prepare this article: IDL 7.0.1.
