For example, when counting objects, whole numbers (1, 2, 3, etc.) or integers (which also include 0 and negative whole numbers) are appropriate, since most applications do not involve fractions of those objects (especially when the objects are people or other living things).
On the other hand, measurements such as height or weight, or percentages, require the representation of fractions (for example, 3/2 or 70/3) or decimal numbers (for example, 30.2 or 2.11).
In order to handle the different classes of numbers efficiently, computers provide a small set of basic data types for numbers, namely "byte", "short", "integer", "long", "float", and "double". Each of these types uses a fixed number of bits (a bit has two states) and can therefore represent only a limited number of values.
The type "byte" uses 8 bits and can have 256 different values (2 to the power 8).
The type "short" uses two bytes (16 bits) and can have 65,536 different values (2 to the power 16).
The type "integer" uses four bytes (32 bits) and can have 4,294,967,296 different values (2 to the power 32).
The type "long" uses eight bytes (64 bits) and can have 18,446,744,073,709,551,616 different values (2 to the power 64).
The type "float" uses four bytes (32 bits) like an "integer", but is used to represent numbers over a much wider range of values. This is accomplished by limiting the precision of the values.
The type "double" uses eight bytes (64 bits) like a "long". It has roughly double the precision of "float" numbers and ranges from ridiculously small values (2 to the power (-1074), about 4.9 times 10 to the power (-324)) to astronomically large values (about 1.7976931348623157 times 10 to the power 308). Written out in full, that largest value has 309 digits before the decimal point.
The data types "byte", "short", "integer", and "long" come in two variations: "signed" and "unsigned". If the range of numbers starts at zero, the type is called "unsigned". If it starts below zero, it is called "signed"; in that case half of the values lie below zero and the other half run from zero up to one less than half the total count (for a signed byte, -128 to 127).