In computing, floating point describes a method of representing an approximation to real numbers in a way that can support a wide range of values. Where fixed-point numbers are intrinsically integer-like and consist purely of a significand, floating-point representation extends the significand with an exponent component to achieve a much greater range. Numbers are, in general, represented approximately to a fixed number of significant digits (the significand, also called the mantissa) and scaled using an exponent; in computing, the base is normally 2. A number that can be represented exactly is typically of the form:

significand × base^exponent, e.g. 1.23 × 2^3

The term floating point refers to the fact that the decimal point or, more commonly in computers, the binary point can 'float'; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component of the internal representation, so floating point can be thought of as a computer realization of scientific notation. Computer representations of floating-point numbers typically use a form of rounding to significant figures, but with binary numbers.
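To make the significand/exponent split concrete, here is a minimal sketch using Python's standard-library math.frexp, which decomposes any IEEE 754 double into exactly this form (the sample values are illustrative, not from the text):

```python
import math

# Decompose a float into significand and base-2 exponent:
# value == significand * 2**exponent, with 0.5 <= |significand| < 1.
for value in [1.23 * 2**3, 0.1, 9.84]:
    significand, exponent = math.frexp(value)
    # ldexp is the inverse operation: significand * 2**exponent.
    round_trip = math.ldexp(significand, exponent)
    print(f"{value!r} = {significand!r} * 2**{exponent}  (round trip: {round_trip!r})")
```

Note that 0.1 round-trips exactly through frexp/ldexp even though it has no finite binary expansion: the stored double is already an approximation, and the decomposition merely exposes its significand and exponent.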
Konrad Zuse used floating-point representation in nearly every one of his computing machines. The Z1, Z3 and Z4 were based on floating-point representation for calculations; the Z2 used fixed-point arithmetic.
Supplement: Sometimes the leading 1 bit of a normalized significand is not actually stored in the computer datum, because it cannot have any value other than 1. Actually storing it can therefore be omitted. It is called the "hidden" or "implicit" bit.
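A short sketch of how the hidden bit shows up in an IEEE 754 double (assuming the standard binary64 layout of 1 sign bit, 11 exponent bits, and 52 stored fraction bits; the helper name is illustrative):

```python
import struct

def decode_binary64(x: float) -> None:
    # Reinterpret the double's 8 bytes as a 64-bit unsigned integer.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # 11 biased exponent bits
    fraction = bits & ((1 << 52) - 1)     # 52 stored fraction bits
    # For normalized numbers (exponent != 0 and != 0x7FF), the leading 1
    # is not stored; it is supplied implicitly when reconstructing:
    # value = (-1)**sign * (1 + fraction/2**52) * 2**(exponent - 1023)
    significand = 1 + fraction / 2**52
    value = (-1) ** sign * significand * 2 ** (exponent - 1023)
    print(f"{x!r}: fraction={fraction:#015x}, significand={significand!r}, "
          f"reconstructed={value!r}")

decode_binary64(9.84)  # the hidden bit is the leading 1 of 1.xxx... * 2**e
```

Adding the implicit 1 back in during reconstruction is exactly the step the stored datum omits, which is how the format gains one extra bit of precision for free.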
cf. Wikipedia