My processor can only do arithmetic operations on 8-bit or 16-bit unsigned integers.
1) This means the word size for this processor is 16 bits, correct?
Operations on words are O(1).
The reason for this has to do with the actual circuit-level implementation of how the processor works, correct?
If I were to add two words, and the result were a number with more than 16 bits, could I state the following?
1) the processor can add the numbers but would only report the 16 least significant bits.
2) to also report more than 16 bits, the processor must have software that permits these operations with large numbers (numbers that don't fit in a word).
Finally,
Let's say I have a word w, which is a 16-bit value, and I want the eight least significant bits. I can do w & 0xFF. What is the time complexity of this operation? Is this O(1) because of the circuit-level implementation of the processor as well?
O(1) means it takes constant time no matter what the size of the input is. Not sure if that is a meaningful question here, as the size of the input is fixed to 16 bits and cannot grow anyway (or rather, you don't need to consider anything else than the least significant word to get to the last eight bits). – Dermatoid