I'm not sure how much detail they want you to give - the number of marks that the question is worth would be some indication. You'll probably know most of the following, but I'll give the background info just in case.
start long, probably useless background info
A computer's memory is made of specialised silicon chips called memory chips, consisting of millions of copies of a particular electrical circuit. This memory circuit is like a light switch, in that it can be in one of two states, 'off' and 'on'. The computer can set the state to off or on and it can find out what the current state is (without changing it). Each of these operations takes about a hundred-millionth of a second (around 10 nanoseconds).
One of these circuits is enough to remember whether something is true or not, with 'on' representing 'true' and 'off' representing 'false'. We say that such a circuit holds one bit of information. The information can have any interpretation we choose. What can't be changed is the fact that there are only two possible values, one represented by 'off' and the other by 'on'.
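(In Java terms - a tiny sketch of my own, with a made-up variable name - this is exactly the job a boolean does, though the JVM may use more than one physical bit to store it:)

boolean ticketsAvailable = true;    // 'on' - one bit of information
ticketsAvailable = false;           // the switch flips to 'off'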
A possible interpretation is to let 'off' mean 0, and 'on' mean 1, and in this way the memory circuit can stand for a number (either 0 or 1). By taking several circuits together we can represent larger numbers. With two bits of information we can represent four different possible values: 00, 01, 10 and 11. Each of these could represent a number (i.e. 0, 1, 2 and 3) but they could equally well be red, yellow, blue and green. The computer program that uses these two bits decides what they mean. With three bits we get eight values and can count up to seven: 000, 001, 010, 011, 100, 101, 110, 111 (from 0 to 7).
This particular representation is called the binary encoding of the integers, since the patterns listed are just binary (base 2) numbers. In general, using k bits allows 2^k different values, which is enough to count from 0 to 2^k - 1. In a typical modern computer, 32 bits are grouped together in this way to form a word:
00000010010001001110001011011011
By our formula, 2^32 = 4294967296 different values can be stored in one 32-bit word (of course, only one value can be stored at any one time), and this is enough to count from 0 to 2^32 - 1. However, a different representation which allows negative numbers is more commonly used, and in that representation it is possible to store any integer in the range -2^31 to 2^31 - 1 inclusive. It is not hard to check that, although a different set of numbers is being represented, there are still exactly 2^32 different possible values. Although in principle a computer could store arbitrarily large integers, by allocating as many bits as required, for practical efficiency integers are limited to this range so that they always fit into one word. There's also a smaller grouping, of eight bits into one byte:
00110101
Four bytes make one word on most computers. Since it contains eight bits, one byte can represent 2^8 = 256 values.
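If you want to convince yourself of these counts, here's a quick sketch in Java (the class name is just for illustration):

public class BitCounts {
    public static void main(String[] args) {
        // All 3-bit patterns: 000 to 111 - eight values, counting 0 to 7.
        for (int i = 0; i < 8; i++) {
            String bits = String.format("%3s", Integer.toBinaryString(i)).replace(' ', '0');
            System.out.println(bits);
        }
        System.out.println(1L << 8);    // one byte: 2^8  = 256 values
        System.out.println(1L << 32);   // one word: 2^32 = 4294967296 values
    }
}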
A computer's memory is measured in bytes; a typical memory chip holds 64MB (64 megabytes, or millions of bytes) of memory. Each byte has a number, from 0 up to one less than the memory size, called its address, and the computer is able to store and retrieve the value of the byte at any given address in this range, again in about a hundred-millionth of a second.
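To make the addressing idea concrete, here's a toy model in Java (my own illustration - real memory isn't exposed to Java programs like this):

public class ToyMemory {
    public static void main(String[] args) {
        byte[] memory = new byte[65536];    // 64KB of 'memory', addresses 0 to 65535

        memory[5076] = 42;                  // store the value 42 at byte number 5076
        byte value = memory[5076];          // retrieve the value stored there

        System.out.println("byte 5076 holds " + value);
    }
}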
In summary - a computer's memory holds information; the number of bits determines the number of possible values; one byte typically holds one character, and one word typically holds one integer; the computer can assign values to these bytes and words, and retrieve the values stored in them very quickly. Also, each byte has a number: 'byte number 5076' or whatever.
These basic capabilities of the computer appear in high-level languages such as Java, slightly dressed up to make them easier to use. For example, suppose we want to set aside some memory to hold an integer, which represents the number of concert tickets we've sold. In Java we would write the following declaration:
int totalSold;
This is an instruction to the computer to set aside enough memory to hold one integer (one word of memory, as we know). Instead of nominating which word ('the word beginning at byte number 5076', perhaps), for our convenience we let the computer find a word for us that is not currently being used to hold anything else; and we give that word the name totalSold, which is more readable for us than a number. We rely on the computer to understand that whenever we say totalSold we mean the word that it chose for us when we gave the declaration. Names like totalSold, that stand for chunks of memory, are called variables in computer programming.
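Putting that together (a minimal sketch - the surrounding class is just scaffolding so it runs):

public class TicketCount {
    public static void main(String[] args) {
        int totalSold;                    // set aside one word, named totalSold
        totalSold = 0;                    // store the value 0 in that word
        totalSold = totalSold + 3;        // retrieve it, add 3, store the result back
        System.out.println(totalSold);    // prints 3
    }
}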
end long, probably useless background info
Now, to answer your question.
1. To store the data in memory, the computer allocates a set number of bytes (2, 4 or 8 in this case), each consisting of a series of bits. In the case of an integer or long int, the bit sequence represents the binary encoded version of the relevant number (don't use abbreviations in an exam). In the case of a decimal, it's slightly more complicated, but the general idea has to do with scientific notation. Take for example the number 1.234 * 10^23. It is split into two parts - the mantissa (1.234) and the exponent (23), the power-of-ten multiplier (which means the number multiplied out would have 20 zeroes in it: 23 minus the three decimal places). ((I'm not certain of all the specifics here, but the standard encoding, IEEE 754, does essentially this in base two: one bit for the sign, a fixed group of bits for the exponent, and the rest for the mantissa.)) Most high-level programming languages also assign a given variable name to such memory blocks.
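If it helps, Java will actually show you the three fields of a stored decimal number. This sketch assumes the IEEE 754 double layout (1 sign bit, 11 exponent bits, 52 mantissa bits), which is what Java's double uses:

public class FloatParts {
    public static void main(String[] args) {
        double d = 1.234e23;
        long bits = Double.doubleToLongBits(d);    // the raw 64-bit pattern

        long sign     = (bits >>> 63) & 0x1L;             // 1 bit
        long exponent = ((bits >>> 52) & 0x7FFL) - 1023;  // 11 bits, stored with a bias of 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;          // 52 bits

        System.out.println("sign = " + sign + ", exponent (power of two) = " + exponent);
        System.out.println("mantissa bits = 0x" + Long.toHexString(mantissa));
    }
}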
2. Integer arithmetic is close to, but not exactly, pure base-two (binary) arithmetic. The low-order bit is worth 1, the next 2, then 4 and so forth, as in pure binary. But signed numbers are represented in two's-complement notation: the highest-order bit is a sign bit (when it is set, the quantity is negative), and every negative number can be obtained from the corresponding positive value by inverting all the bits and adding one. This is why integers using 32 bits (4 bytes) have the range -2^31 to 2^31 - 1: the 32nd bit is being used for the sign. Similarly, integers using 16 bits (2 bytes) have the range -2^15 to 2^15 - 1 (i.e. -32768 to 32767).
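You can watch two's-complement at work in Java, where int is a 32-bit signed word:

public class TwosComplement {
    public static void main(String[] args) {
        int n = 5;
        System.out.println(~n + 1);                      // -5: invert all bits, add one
        System.out.println(Integer.toBinaryString(5));   // ...101
        System.out.println(Integer.toBinaryString(-5));  // 32 bits, high (sign) bit set
        System.out.println(Integer.MIN_VALUE);           // -2147483648, i.e. -2^31
        System.out.println(Integer.MAX_VALUE);           //  2147483647, i.e. 2^31 - 1
    }
}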
I'm afraid I can't formally prove the range given for the decimal numbers - sorry!!
And I apologise for the overly verbose post.