What does it mean for a program to be 32 or 64 bit? [closed]

Word size is a major difference, but it’s not the only one. It tends to define the number of bits a CPU is “rated” for, but word size and overall capability are only loosely related. And overall capability is what matters.

On an Intel or AMD CPU, 32-bit vs. 64-bit software really refers to the mode in which the CPU operates when running it. 32-bit mode has fewer and smaller registers and a smaller instruction set available, but the most important limitation is memory: 32-bit software is generally limited to using between 2GB and just under 4GB.
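
A quick way to see which mode a program was built for is to check the size of a pointer. Here is a minimal C sketch (standard library only, nothing else assumed):

```c
#include <stdio.h>

int main(void) {
    /* sizeof(void *) is 4 bytes in a 32-bit build and 8 bytes in a
       64-bit build, because a pointer must be able to hold any
       address the program can use. */
    printf("This is a %zu-bit program (pointers are %zu bytes).\n",
           sizeof(void *) * 8, sizeof(void *));
    return 0;
}
```

On a typical x86-64 Linux toolchain, compiling with `gcc -m32` prints 32; the default 64-bit target prints 64.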

Each byte of memory has a unique address, which is not very different from each house having a unique postal address. A memory address is just a number that a program can use to find a piece of data again once it has saved it in memory, and each byte of memory has to have an address. If an address is 32 bits, then there are 2^32 possible addresses, and that means 2^32 addressable bytes of memory. On today’s Intel/AMD CPUs, the size of a memory address is the same as the size of the registers (although this wasn’t always true).
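
To make that concrete, here is a small C sketch that converts the address of a variable to a plain integer and prints it, showing that an address really is just a number:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    int x = 42;
    /* uintptr_t is an unsigned integer type wide enough to hold a
       pointer; the conversion exposes the address as a number. */
    uintptr_t addr = (uintptr_t)&x;
    printf("x is stored at address %" PRIuPTR " (hex 0x%" PRIXPTR ")\n",
           addr, addr);
    return 0;
}
```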

With 32-bit addresses, a program can address 4GB (2^32 bytes); however, up to half of that space is reserved by the OS. Into the remaining space must fit program code, data, and often also files being accessed. In today's PCs, with many gigabytes of RAM, that is not enough to take advantage of the memory available, and this is the main reason 64-bit became popular. 64-bit CPUs were available, and widely used in 32-bit mode, for several years; once memory sizes larger than 2GB became common, 64-bit mode started to offer real-world advantages. 64 bits of memory address space provides 16 exabytes of addressable memory (~18 quintillion bytes), which is more than any current software can use, and certainly no PC has anywhere near that much RAM.
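
The arithmetic is easy to check in code. A small sketch, again standard C only:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint64_t space32 = (uint64_t)1 << 32;  /* 2^32 bytes = 4 GiB */
    printf("32-bit address space: %" PRIu64 " bytes\n", space32);
    /* 2^64 itself overflows a 64-bit integer, so print 2^64 - 1. */
    printf("64-bit address space: %" PRIu64 " + 1 bytes (16 EiB)\n",
           UINT64_MAX);
    return 0;
}
```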

The majority of data used in typical applications, even in 64-bit mode, does not need to be 64-bit, so most of it is still stored in 32-bit (or even smaller) formats. The common ASCII and UTF-8 representations of text use 8-bit code units. If a program needs to move a large block of text from one place to another in memory, it may do so 64 bits at a time, but if it needs to interpret the text, it will probably do so 8 bits at a time. Similarly, 32 bits is a common size for integers (range of -2^31 to 2^31 - 1, or roughly +/- 2.1 billion), and that is enough range for many uses. Graphics data is usually represented pixel by pixel, and each pixel usually contains at most 32 bits of data.
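
These sizes are easy to verify with the fixed-width types from `<stdint.h>`; a short sketch:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* Common data sizes that stay the same even in a 64-bit program. */
    printf("char (ASCII/UTF-8 code unit): %zu byte\n", sizeof(char));
    printf("int32_t: %zu bytes, range %" PRId32 " to %" PRId32 "\n",
           sizeof(int32_t), INT32_MIN, INT32_MAX);
    printf("uint32_t (e.g. one 32-bit RGBA pixel): %zu bytes\n",
           sizeof(uint32_t));
    return 0;
}
```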

There are disadvantages to using 64-bit data needlessly. 64-bit values take up twice the space of 32-bit values in memory and in the CPU cache (the very fast memory the CPU uses for short-term storage), and memory bandwidth is finite, so moving them costs twice as much. This can reduce performance if done wastefully. And if it's necessary to support both 32-bit and 64-bit versions of software, using 32-bit values where possible can reduce the differences between the two versions and make development easier (it doesn't always work out that way, though).
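
The space cost is easy to see. As a rough illustration (the element count here is made up for the example):

```c
#include <stdio.h>
#include <inttypes.h>

enum { COUNT = 1000000 };  /* one million values, chosen arbitrarily */

int main(void) {
    /* The same million values occupy twice the memory (and twice the
       cache space and bandwidth) stored as 64-bit rather than 32-bit
       integers. */
    printf("1M x int32_t: %zu bytes\n", sizeof(int32_t) * COUNT);
    printf("1M x int64_t: %zu bytes\n", sizeof(int64_t) * COUNT);
    return 0;
}
```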

Before the 32-bit era, address size and word size were usually different (e.g. the 16-bit 8086/88 with 20-bit memory addresses but 16-bit registers, the 8-bit 6502 with 16-bit memory addresses, or even early 32-bit ARM with 26-bit addresses). While no programmer ever turned up their nose at better registers, memory space was usually the real driving force behind each advancing generation of technology. This is because most programmers rarely work directly with registers but do work directly with memory, and memory limitations cause unpleasantness directly for the programmer, and in the 32-bit to 64-bit case, for the user as well.

To sum up, while there are real and important technological differences between the various bit sizes, what 32-bit or 64-bit (or 16-bit or 8-bit) really means is simply a collection of capabilities that tend to be associated with CPUs of a particular technological generation, and/or software that takes advantage of those capabilities. Word length is a part of that, but not the only, or necessarily the most important part.

Source: Have been a programmer through all these technological eras.
