What are 16-, 32- and 64-bit architectures?

What do 16-bit, 32-bit and 64-bit architectures mean in the case of microprocessors and/or operating systems?

In the case of microprocessors, does it mean the maximum size of the general-purpose registers, the size of an integer, the number of address lines, the number of data-bus lines, or something else?

What do we mean by saying "DOS is a 16-bit OS", "Windows is a 32-bit OS", etc.?

Florist answered 29/8, 2010 at 10:40 Comment(0)

My original answer is below, if you want to understand the comments.

New Answer

As you say, there are a variety of measures. Luckily, for many CPUs a lot of the measures are the same, so there is no confusion. Let's look at some data (sorry for the image upload; I couldn't see a good way to do a table in markdown): [table image: data-path widths for various CPUs, including columns for general-purpose register size (green) and instruction bus width (yellow)]

As you can see, many columns are good candidates. However, I would argue that the size of the general purpose registers (green) is the most commonly understood answer.

When a processor's registers vary widely in size, it will often be described in more detail, e.g. the Motorola 68k being described as a 16/32-bit chip.

Others have argued it is the instruction bus width (yellow), which also matches in the table. However, in today's world of pipelining, I would argue this is a much less relevant measure for most applications than the size of the general-purpose registers.


Original answer

Different people can mean different things, because as you say there are several measures. So, for example, someone talking about memory addressing might mean something different from someone talking about integer arithmetic. However, I'll try to define what I think is the common understanding.

My take is that for a CPU it means "the size of the typical register used for standard operations" or "the size of the data bus" (the two are normally equivalent).

I justify this with the following logic. The Z80 has an 8-bit accumulator and an 8-bit data bus, while having 16-bit memory-addressing registers (IX, IY, SP, PC) and a 16-bit memory address bus. And the Z80 is called an 8-bit microprocessor. So people must normally mean the main integer-arithmetic size or the data-bus size, not the memory-addressing size.

It is not the size of instructions either: the Z80 (again) has 1-, 2- and 3-byte instructions, though of course the multi-byte ones are read in multiple fetches. In the other direction, the 8086 is a 16-bit microprocessor and can read 8- or 16-bit instructions. So I would have to disagree with the answers that say it is instruction size.

For operating systems, I would define it as "the code is compiled to run on a CPU of that size", so a 32-bit OS has code compiled to run on a 32-bit CPU (as per the definition above).
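For illustration, here is a minimal C sketch (assuming nothing beyond a standard C compiler): the pointer width baked into a binary shows which size of CPU it was compiled for.

#include <stdio.h>

int main(void) {
    /* A binary built for a 32-bit CPU has 4-byte pointers;
       one built for a 64-bit CPU has 8-byte pointers. */
    printf("compiled for a %zu-bit target\n", sizeof(void *) * 8);
    return 0;
}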

Frizzell answered 29/8, 2010 at 10:46 Comment(6)
Can you please elaborate on what you mean by "Language is sloppy"?Florist
@JMSA I believe Nick is pointing to the fact that the terms 16-bit, 32-bit, and 64-bit are ambiguous. Their meaning changes slightly depending on what you are describing.Macias
It isn't the terms that are ambiguous, it's the actual processor designs. The various widths were all optimized separately and thus only loosely related. The rise of C has "encouraged" the data and address widths to be the same, but it wasn't always that way. The actual bus widths were often completely different from either.Blackboard
The bit count of CPUs is quite accurately described at Wikipedia; it's not as sloppy as you describe it...Brame
Thanks for comments, hopefully the wording is better now.Frizzell
Note that instruction size and instruction word length are not the same. Nor is the instruction word guaranteed to be the same size as a data word (there are a few Harvard architectures out there, especially in the embedded world).Simonsimona

How many bits a CPU "is" means what its instruction word length is. On a 32-bit CPU, the instruction word length is 32 bits, meaning that this is the width the CPU can handle as instructions or data, often resulting in a bus of that width. For the same reason, registers are the size of the CPU's word length, though you often have larger registers for special purposes.

Take the PDP-8 computer as an example. This was a 12-bit computer. Each instruction was 12 bits long. To handle data of the same width, the accumulator was also 12 bits. What made the PDP-8 a 12-bit machine was its instruction word length: it had twelve switches on the front panel with which it could be programmed, instruction by instruction.

This is a good example to break out of the 8/16/32-bit focus.

The bit count is also typically the size of the address bus, so it usually tells you the maximum addressable memory.
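For example, a 32-bit address bus can distinguish 2^32 byte addresses, i.e. 4 GiB on a byte-addressed machine. A quick C sketch of the arithmetic:

#include <stdio.h>

int main(void) {
    unsigned bus_bits = 32;                       /* width of the address bus */
    unsigned long long bytes = 1ULL << bus_bits;  /* 2^32 = 4294967296 */
    printf("%u address lines -> %llu bytes (%llu GiB)\n",
           bus_bits, bytes, bytes >> 30);
    return 0;
}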

There's a good explanation of this at Wikipedia:

In computer architecture, 32-bit integers, memory addresses, or other data units are those that are at most 32 bits (4 octets) wide. Also, 32-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. 32-bit is also a term given to a generation of computers in which 32-bit processors were the norm.

Now let's talk about OSes.

With OSes, this is far less bound to the actual "bitty-ness" of the CPU. It usually reflects the word length the opcodes are assembled for, how registers are addressed (you can't load a 32-bit value into a 16-bit register) and how memory is addressed. Think of it as the completed, compiled program: it is stored as binary instructions and therefore has to fit the CPU's word length. Task-wise, it has to be able to address the whole memory, otherwise it couldn't do proper memory management.

What it comes down to is that whether a program is 32- or 64-bit (an OS is essentially a program here) is a matter of how its binary instructions are stored and how registers and memory are addressed. All in all, this applies to all kinds of programs, not just OSes. That's why you have programs compiled for 32-bit or 64-bit.

Brame answered 29/8, 2010 at 11:23 Comment(6)
Instruction word length is partially internal; sometimes an instruction can be longer than the bus connecting the CPU to program memory (in a von Neumann design there is just one address space for both program memory and working memory, stack, etc.). Especially when using pipelining, an instruction can be longer than your bus line. Internally, that opcode has a certain width. Most CPUs use microcode to decode the opcode, and this microcode can handle a certain maximum instruction width. That is the instruction word width.Brame
I'm not talking about microcode instructions. A CPU instruction is decoded by the microcode. Now this CPU instruction has a (maximum) length. This length is defined by the hardware design of the CPU and its microcode.Brame
"I always thought the "bits" referred to the bus width." Counter example: the first macs were m68000s (definitely a 32bit chip) but ran on 16bin main buses. It took two cycles to perform a full width fetch or store, but this was invisible to the programmer (abstracted out by the cache architecture) except in terms of sustained memory access speed.Simonsimona
@Marting: Yes, but keep in mind the opcode can be longer than the bus line's width! It is very likely that opcode + data take multiple cycles to be read and then decoded.Brame
@Brame Sorry if I'm extremely slow but I still don't get it... afaik a Pentium 4 is a 32-bit processor, but certainly has opcodes longer than 4 bytes. Or do you mean only the maximum opcode size internally, i.e. after it has been decoded? If so, does that maximum decoded size really matter to a programmer at all?Eviscerate
@Eviscerate exactly, opcodes handed to the microcode can be longer than the architecture number. But that opcode is then divided into chunks that the microcode processes, and that chunk length, if you will, is the length the architecture signifies.Brame

The difference comes down to the width of the data a general-purpose register can operate on in a single instruction: a 16-bit CPU operates on 2 bytes at a time, a 64-bit CPU on 8 bytes. You can often increase a processor's throughput by having each instruction operate on more data per clock cycle.

Laminate answered 27/12, 2012 at 18:27 Comment(1)
Felt this needed a short explanation rather than 7 long, inaccurate ones.Laminate

The definitions are marketing terms more than precise technical terms.

In fuzzy technical terms, they are more related to architecturally visible widths than to any real implementation register or bus width. For instance, the 68008 was classed as a 32-bit CPU, but had 16-bit registers in the silicon and only an 8-bit data bus and 20-odd address bits.

Blackboard answered 29/8, 2010 at 10:56 Comment(1)
The 6502 was classed as an 8-bit processor, but had 16-bit addressing, a 16-bit address bus, and 8-, 16-, and 24-bit instructions. The MIPS architecture had options for 64-bit data with 32-bit addresses, or 64 bits for both, but the early implementations only had 32-bit buses, etc. Marketing usually preferred the biggest number possible, unless targeting the extremely low-cost embedded market.Blackboard

See http://en.wikipedia.org/wiki/64-bit#64-bit_data_models: the data models define what bitness means for the language.
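A minimal C sketch to see which data model your compiler uses (the models named in the comment are the common ones; nothing here is specific to any single platform):

#include <stdio.h>

int main(void) {
    /* ILP32 (32-bit systems):     int=4, long=4, void*=4
       LP64  (64-bit Linux/macOS): int=4, long=8, void*=8
       LLP64 (64-bit Windows):     int=4, long=4, void*=8 */
    printf("int: %zu, long: %zu, void*: %zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    return 0;
}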

The "OS is x-bit" phrase usually means that the OS was written for x-bit cpu mode, that is, 64-bit Windows uses long mode on x86-64, where registers are 64 bits and address space is 64-bits large and there are other distinct differences from 32-bits mode, where typically registers are 32-bits wide and address space is 32-bits large. On x86 a major difference between 32 and 64 bits modes is presence of segmentation in 32-bits for historical compatibility.

Usually the OS is written with the CPU's bitness in mind, x86-64 being a notable example of decades of backwards compatibility: you can have everything from 16-bit real-mode programs through 32-bit protected-mode programs to 64-bit long-mode programs.

Plus there are different ways to virtualise, so your program may run as if it were in 32-bit mode when in reality it is not being executed by an x86 core at all.

Canute answered 30/8, 2010 at 22:19 Comment(1)
To add, many architectures have only one bitness, so only language data models have meaning when talking about bitness on those architectures. Other architectures, such as ARM, are 32-bit per se, but have additional modes, the so-called Thumb/Thumb2 modes, which increase instruction density by encoding some instructions in 16 bits instead of 32. They are still considered 32-bit CPUs and the OSes they run are usually 32-bit.Canute

As far as I know, technically it's the width of the integer pathways. I've heard of 16-bit chips that have 32-bit addressing. However, in reality it is the address width: sizeof(void*) is 2 bytes on a 16-bit chip, 4 on a 32-bit chip, and 8 on a 64-bit chip.

This leads to problems because C and C++ allow conversions between void* and integral types, which is only safe if the integral type is large enough (the same size as the pointer). This led to all sorts of unsafe code like:

void* p = something;
int i = (int)p;

This will horrifically crash and burn in 64-bit code (it works in 32-bit code) because void* is now twice as big as int.
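The portable alternative (available since C99, where <stdint.h> provides the optional intptr_t/uintptr_t types) is to round-trip through an integer type wide enough for a pointer. A minimal sketch:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    void *p = &x;
    intptr_t i = (intptr_t)p;  /* wide enough for an object pointer on any target */
    void *q = (void *)i;       /* round-trips without truncation */
    printf("%d\n", *(int *)q); /* prints 42 */
    return 0;
}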

In most languages, you have to go out of your way to even notice the width of the system you're working on.

Sugar answered 29/8, 2010 at 10:44 Comment(2)
"Which will horrifically crash and burn on 64bit code (only works on 16bit) because void* is now twice as big as int." This applies to 64-bit Windows, but not x64-Linux where sizeof(int) == 8.Corbett
The special cases in which terrible code might actually work should be ignored, not posted. Also, fixed 16bit to 32bit.Sugar

When we talk about 2^n-bit architectures in computer science, we are basically talking about the size of the registers, the address bus, or the data bus. The basic idea behind the term "2^n-bit architecture" is that the processor can address or transport data 2^n bits at a time.

Sapheaded answered 29/8, 2010 at 10:54 Comment(1)
Architectures are not limited to 2^n bits: 18-, 24-, and 36-bit architectures were widely used during the minicomputer era.Simonsimona
