Sunday, February 17, 2008

Feb 18 2008

MAIN POINTS
The text emphasizes that our ideal notion of numbers cannot be realized inside computers, because numbers must be represented in a finite number of bits. It then fleshes out the IEEE standard for representing floating point numbers: a sign bit, a fixed number of exponent (characteristic) bits, and a fixed number of mantissa bits, with the counts depending on which standardized length (single or double precision) is used. The mantissa is normalized to maximize precision (its bits are shifted as far left as possible), and the exponent is stored with a bias, so the true exponent is recovered by subtracting a large fixed number from the stored field. Underflow and overflow are also discussed, as is the fact that two zeroes (+0 and -0) are possible. Rounding and chopping (truncating) are compared; rounding has the disadvantage that a carry can force many digits, possibly spilling into other fields, to change. Relative and absolute errors are defined again, the same way they were in earlier readings, and the notion of significant digits is defined.
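As a quick illustration (my own, not from the reading), Python's struct module can reinterpret a 64-bit double as raw bits, which makes the sign/exponent/mantissa split, the exponent bias, and the two zeroes visible directly:

    import struct

    def float_bits(x):
        """Split a 64-bit IEEE double into its sign, exponent, and mantissa fields."""
        # Reinterpret the 8 bytes of the double as one 64-bit unsigned integer.
        bits = struct.unpack('>Q', struct.pack('>d', x))[0]
        sign = bits >> 63                   # 1 sign bit
        exp_field = (bits >> 52) & 0x7FF    # 11 biased exponent bits
        mantissa = bits & ((1 << 52) - 1)   # 52 mantissa (fraction) bits
        # The true exponent comes from subtracting the bias of 1023
        # (an all-zero exp_field is the special pattern for zero and subnormals).
        return sign, exp_field - 1023, mantissa

    print(float_bits(1.0))    # (0, 0, 0): 1.0 = +1.fraction * 2^0
    print(float_bits(0.1))    # nonzero mantissa: 0.1 is inexact in binary
    print(float_bits(-0.0))   # sign bit 1, all else 0: the "second zero"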

CHALLENGES
I've never seen significant digits defined this way; I had only used them in chem, bio, and physics. I recently covered the IEEE floating point format in COMP240, but I can see how this brief introduction could confuse people.

REFLECTION
It would be interesting to see how errors propagate as truncated numbers are fed into further calculations. Also, what happens when we want to minimize error and the IEEE formats aren't precise enough? Could the user define a special data type with many more bits, or build one out of an array?
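As a partial answer to my own question (an experiment of mine, not something from the text), Python's decimal module lets you set the working precision explicitly, so you can watch truncation error accumulate over repeated additions and see how extra digits push it back:

    from decimal import Decimal, getcontext

    def sum_of_thirds(n, digits):
        """Add 1/3 to itself n times at a fixed working precision."""
        getcontext().prec = digits
        third = Decimal(1) / Decimal(3)   # already rounded to `digits` digits
        total = Decimal(0)
        for _ in range(n):
            total += third                # every addition rounds again
        return total

    for digits in (8, 16, 32):
        approx = sum_of_thirds(1000, digits)
        getcontext().prec = 50            # high-precision reference context
        exact = Decimal(1000) / Decimal(3)
        print(digits, abs(exact - approx) / exact)   # relative error shrinks as digits grow

The tradeoff is presumably speed: precision defined in software like this is far slower than the hardware IEEE formats, which would explain why it's an opt-in library rather than the default.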
