Can I assume that a bitwise AND operation is O(1)? If so why?

My processor can only do arithmetic operations on 8-bit or 16-bit unsigned integers.

1) This means the word size for this processor is 16 bits, correct?

Operations on words are O(1).

The reason for this has to do with the actual circuit-level implementation of how the processor works, correct?

If I were to add two words, and the result were a number with more than 16 bits, could I state the following?

1) the processor can add the numbers but would only report the 16 least significant bits.

2) to report more than 16 bits, the processor must rely on software that implements these operations on large numbers (numbers that don't fit in a word) -- see the sketch just below.
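
For illustration, here is a minimal Python sketch of what I mean (the names MASK16, add16 and add32 are just invented for this example; a real CPU does point 1 in hardware and exposes the overflow as a carry flag):

MASK16 = 0xFFFF

def add16(a, b):
    # Add two 16-bit words the way the hardware would: keep only the low
    # 16 bits of the result and report the overflow separately (the carry).
    total = a + b
    return total & MASK16, total >> 16      # (16-bit result, carry 0 or 1)

# Point 1: only the 16 least significant bits end up in the register.
low, carry = add16(0xFFFF, 0x0001)          # low == 0x0000, carry == 1

# Point 2: software chains the carry through several words to handle
# numbers that don't fit in one word (multi-precision arithmetic).
def add32(a_hi, a_lo, b_hi, b_lo):
    lo, c = add16(a_lo, b_lo)
    hi, _ = add16(a_hi, b_hi + c)
    return hi, lo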

Finally,

Let's say I have a word w, which is a 16-bit value, and I want the eight least significant bits. I can do w & 0xFF. What is the time complexity of this operation? Is this O(1) because of the circuit-level implementation of the processor as well?
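
Concretely, in Python (the value is made up just for illustration):

w = 0xABCD             # a 16-bit word
low_byte = w & 0xFF    # keeps only the 8 least significant bits -> 0xCD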

Antares asked 25/4, 2016 at 1:24 Comment(4)
When you add two words you can check the status flags to see if there was an overflow (at least on most CPUs).Dermatoid
O(1) means it takes constant time no matter what the size of the input is. Not sure if that is a meaningful question here, as the size of the input is fixed to 16 bits and cannot grow anyway (or rather, you don't need to consider anything else than the least significant word to get to the last eight bits).Dermatoid
Why does this have the [python] tag?Frowsy
A single bitwise AND operation is O(1). Gave details in an answer below.Kirven

Short Answer:

Yes, a single bitwise AND would be considered O(1).

A Little More Detail:

Even if you look at the number of operations on each individual bit, it is still O(1). The actual number of bit operations may vary with the variable type, e.g. 8 bits vs. 16 bits vs. 32 bits vs. 64 bits (even 128 bits or more). The key is that whatever width the underlying machine uses, it still performs a constant number of operations for the AND. So even as computers develop over time, a bitwise AND will still be O(1).
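
One way to picture this is a minimal Python model of the per-bit work (purely illustrative; WORD_SIZE and bitwise_and are names invented here, and real hardware evaluates all bit positions in parallel in a single instruction):

WORD_SIZE = 16                              # the machine's register width

def bitwise_and(x, y):
    # The loop runs WORD_SIZE times no matter what values x and y hold,
    # so the amount of work is a fixed constant -> O(1).
    result = 0
    for i in range(WORD_SIZE):
        bit = ((x >> i) & 1) & ((y >> i) & 1)
        result |= bit << i
    return result

assert bitwise_and(0xABCD, 0x00FF) == (0xABCD & 0x00FF)   # both equal 0x00CD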

Another Example to Help Add Clarification

The following block of code is O(1):

print('Hello World');
print('Hello World');
print('Hello World');

Although we print hello world 3 times, every run takes a constant amount of time; it doesn't take longer if someone feeds a large data set into the program. It simply prints 3 things, no matter the input.

In the case of the bitwise AND, we perform a fixed number of sub-operations for the one operation, e.g. 8, 16, or 32, but that number is always the same constant.

In your example, it sounds like you are also trying to show that some operations would not require all of the bits. Even if these smaller operations only considered 4 bits out of, say, 8, your code would still do a constant number of operations when it hits that line. It's like printing 4 hello world statements instead of 8 hello world statements. Either way, 4 or 8 prints, it's still constant.

This is why a single bitwise AND operation is O(1).

Kirven answered 25/4, 2016 at 14:59 Comment(4)
I don't think the answer is accurate about O notation to measure instruction cycles. It's used to measure algorithm complexity. In your version, every single processor instruction is O(1).Rainband
4 or 8, it's still constant? When you double the set, you double the execution time, i.e O(n) not O(1).Beanpole
@JohnDifool, the question is about Big O for the bitwise AND operation, which this answer addresses accurately. Machines, at least the ones I've studied, have a constant bit length for their registers. Those register lengths are normally used to determine the length of the data structures that store data. Since those registers don't vary in length, they are treated as a constant, as they remain a fixed size or "constant" as the name implies. So I would agree with your statement that single processor instructions would be O(1). This type of analysis does not consider that some instructions are "heavier" than others.Kirven
@ChrisWohlert, yes, if I did one bit, it would be like doing one print statement. If I did operations against half a byte it would require 4 prints, or if I did the full byte it would be 8 prints. The key is that the byte stays constant in size. Thus your bitwise AND would not grow beyond 8. So pretend x is what we are looking for; then we know O(1) <= x <= O(8), which means that x is O(1). 8 "steps" is heavier than 1 step, but it's still constant. How "heavy" the operation is will depend upon the underlying machine architecture, but it's still O(1) if you want to analyze it with Big O.Kirven

First, about the add. Most CPUs have what is called a carry flag in a processor status register that lets you detect when an addition or subtraction overflows the bit size of the register. So find out the size of the registers on your specific CPU to determine its data bus width, then find out how you can check the status register flags. Most CPUs will have SUB and ADD instructions both with and without carry for that purpose.
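
If you are in a high-level language and cannot read the flag directly, you can recover the same information from the wrapped result. A small Python sketch (the 16-bit width and the values are just assumptions for the example):

a, b = 0xFFF0, 0x0020
wrapped = (a + b) & 0xFFFF   # what a 16-bit register would hold: 0x0010
carry = wrapped < a          # classic unsigned-overflow test: True here
# Equivalently, (a + b) >> 16 gives the carry bit directly for 16-bit operands.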

Next, about time complexity: you can't really use Big O notation for this. You need to find out how many cycles the CPU takes to carry out the operation and convert that to absolute time (cycles divided by the clock frequency), then you need to take into consideration other things like main memory access vs. L1 and L2 cache access to figure out the total time an operation will take on that CPU.

Finally, accessing memory from assembly code (as you seem to imply) lets you be much more efficient than with higher-level languages like Python. CPUs include instructions that can adjust their memory addressing to fit the size of what you are looking for. C-like languages also expose that ability in the language, but Python doesn't. JavaScript doesn't even have integers, but I digress...

If your goal is to understand low-level programming, which will always help you understand the machine better, especially around pointers and debugging, I would encourage you to take a video class on Arduino. You may even enjoy it and start a new hobby.

Rainband answered 25/4, 2016 at 1:34 Comment(0)

You apparently have some confusion about what O(...) means.

  1. big-O notation requires a variable; for example, sorting an array of n elements using compare-and-swap is known to require on the order of n*log(n) operations, and there n is the variable: as n increases, the sorting time increases even faster. When saying that x & 0xFF is O(1), what is the variable you're talking about? (The sketch after this list gives one concrete reading.)

  2. big-O is about abstract algorithms where n can grow to infinity. If n and the computer are limited by an upper bound, then any algorithm either doesn't terminate or is bounded by a constant (it doesn't make sense to discuss the limit of something as n increases toward infinity if n cannot grow past a specific bound).
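
Here is one way to make the variable explicit, as a small Python sketch (values made up; the point is only what n measures in each case):

w = 0xBEEF
low = w & 0xFF              # fixed-width word: a constant amount of work, O(1)

n = 1_000_000               # Python integers are arbitrary precision, though:
x = (1 << n) - 1            # an n-bit number with every bit set
y = x & x                   # this AND must touch every word of x, so its cost
                            # grows roughly linearly with n -> O(n), not O(1)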

When talking about low-level hardware operations, all operations are O(1). Some are faster and require just one "step" (like clearing a register), some require more steps (like an integer division). Even the division, however, takes at most a small, fixed number of steps.

When discussing the performance of different algorithms on a concrete CPU, it doesn't make sense to use big-O notation; what you can do is count the number of machine cycles required to complete, with an explicit formula possibly dependent on the input size n, but where n cannot grow to infinity.

This is what Knuth does in TAOCP.

PS: unfortunately, CPUs today are so complex that cycle counting doesn't work any more in the real world. They can, for example, split instructions into micro-instructions rescheduled to run in parallel, and they support speculative execution with backtracking, branch prediction, and other hard-to-analyze techniques. On top of all that there is caching, which is of extreme importance today, and different but compatible models can have vastly different approaches. The only way to really know how long a piece of code takes on a modern CPU is to run and measure it over real data on the specific CPU model and hardware.
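
For example, a minimal measurement sketch in Python along those lines (just an illustration of "run and measure"; the absolute numbers will differ from machine to machine):

import timeit

# Time ten million masked ANDs on this particular machine.
print(timeit.timeit("w & 0xFF", setup="w = 0xBEEF", number=10_000_000))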

Outface answered 25/4, 2016 at 5:58 Comment(2)
I should have been more clear that the perspective for my questions was that of a high-level program. From this perspective, certain operations are so basic that they can be considered O(1). I wanted to confirm if this is indeed correct to say and if so why. A bitwise AND operation on two words is O(1) because the word has a bounded number of bits, even though I imagine you do have to go and compare each of the bits? From a program using the bitwise AND on two numbers that each fit in a word, the operation is O(1), correct?Antares
@evianpring Big-O notation is used to describe how the changing size of a set affects execution time of an operation on that set. It is not about how basic an operation is. An operation could take 4 years and still be O(1), or 1 nanosecond and be O(n).Beanpole

In order:

  1. No. If your processor can do 8- or 16-bit operations, it could be either an 8-bit or a 16-bit processor. In fact, it's more likely to be 8 bits, since many 8-bit processors also try to handle double-size (16-bit) operations.

  2. Yes, O(1). But not simply because it's in hardware; rather because it's implemented as an O(1) operation in hardware. Also, keep in mind that big-O absorbs constant factors: if something takes 16 fixed steps, you could write O(16), but that's really just O(1) times a constant of 16.

Finally, if you have a 16-bit word and you want the low bits, and your processor really does support 8-bit operations, you can probably access the low byte with a MOV instruction. Something like:

mov16 ax, (memory)        ; load the 16-bit word into a register
mov8  (othermemory), al   ; store only its low 8 bits

If that isn't available, and you have to do an AND, then yes, the AND is going to be O(1) because it's almost certainly in hardware that way. Even if not, it's probably O(8) at worst, and that's really an alias for O(1).

Cinderellacindi answered 25/4, 2016 at 1:32 Comment(0)