Difference between O(n) and O(log(n)) - which is better and what exactly is O(log(n))?

This is my first course in data structures, and in every lecture / TA lecture we talk about O(log(n)). This is probably a dumb question, but I'd appreciate it if someone could explain to me exactly what it means!?

Dispirited answered 29/4, 2012 at 3:57 Comment(1)
A possible repetition of #487758 – Contravene

It means that the thing in question (usually running time) scales in a manner that is consistent with the logarithm of its input size.

Big-O notation doesn't mean an exact equation, but rather a bound. For instance, the following functions are all O(n):

f(x) = 3x
g(x) = 0.5x
m(x) = x + 5

Because as you increase x, their outputs all increase linearly - if there's a 6:1 ratio between f(n) and g(n), there will also be approximately a 6:1 ratio between f(10*n) and g(10*n) and so on.
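
As a quick numerical check of that claim, here is a tiny sketch (just an illustration, using the three functions above):

const f = (x) => 3 * x;
const g = (x) => 0.5 * x;
const m = (x) => x + 5;

for (const n of [10, 100, 1000, 10000]) {
  // f(n)/g(n) stays exactly 6; m(n)/g(n) settles toward 2 once the "+ 5" stops mattering.
  console.log(n, f(n) / g(n), m(n) / g(n));
}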


As for whether O(n) or O(log n) is better, consider: if n = 1000, then log n = 3 (for log-base-10). Which would you rather have your algorithm take to run: 1000 seconds, or 3 seconds?
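
To make the two growth rates concrete, a small sketch like this one (a rough illustration, with made-up helper names) counts how many elements a linear scan inspects versus a binary search on the same sorted array of 1000 elements:

// Count how many array elements each strategy looks at.
function linearSearchSteps(sortedArray, target) {
  let steps = 0;
  for (const value of sortedArray) {
    steps++;
    if (value === target) break;
  }
  return steps; // proportional to n in the worst case: O(n)
}

function binarySearchSteps(sortedArray, target) {
  let steps = 0;
  let lo = 0, hi = sortedArray.length - 1;
  while (lo <= hi) {
    steps++;
    const mid = Math.floor((lo + hi) / 2);
    if (sortedArray[mid] === target) break;
    if (sortedArray[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return steps; // proportional to log2(n): O(log n)
}

const arr = Array.from({ length: 1000 }, (_, i) => i); // 0..999, sorted
console.log(linearSearchSteps(arr, 999)); // 1000
console.log(binarySearchSteps(arr, 999)); // 10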

Kinky answered 29/4, 2012 at 4:1 Comment(6)
Nicely explained. Also, I would like to add some information about what a logarithm even is, for those who don't know: log n in computer science means the exponent to which I would need to raise the number 2 to get n. So imagine n = 16; the exponent would be much, much smaller than the actual value of n. It would be 4. Hope this makes sense. In the example above, Amber gives a similar example but raises 10 to the power of 3. – Minaret
+1 - The clearest explanation possible in the smallest number of words. Thank you. – Morez
Also worth noting that Big-O generally refers to any bound, not necessarily the tightest bound (i.e., the smallest g(x) such that f(x) = O(g(x))). f(x), g(x), m(x) are also O(n^2). But in the context of performance analysis, we want the tightest bound (i.e., the smallest function that bounds a given algorithm's performance curve) to give us a "worst case" idea of an algorithm's performance. – Custody
@Horse Voice In your example you use 2 ** 4, while in Amber's example it is 10 ** 3; how does one determine the parameters? – Thanh
This link shows a graph with all the different complexities: bigocheatsheet.com – Joashus
Not contradicting this answer, because it does answer very well. However, "better" should be defined in this case. "Faster", sure, but not in all cases. Say one algorithm is N/100 and the other is (log N)*100; the latter would be slower for small input sizes. But of course, there will inevitably be a point where O(log n) is faster than O(n). – Clinkerbuilt

For the short answer, O(log n) is better than O(n).

Now, what exactly is O(log n)?

Generally, when referring to big O notation, log n refers to the base-2 logarithm (the same way ln represents base-e logarithms). This base-2 logarithm is the inverse of an exponential function. An exponential function grows very rapidly, and we can intuitively deduce that its inverse will do the exact opposite, i.e. grow very slowly.

For example, if

x = log2(n)

then we can represent n as

n = 2^x

And

2^10 = 1024 → lg(1024) = 10

2^20 = 1,048,576 → lg(1,048,576) = 20

2^30 = 1,073,741,824 → lg(1,073,741,824) = 30

Large increments in n lead to only a very small increase in log(n).

For a complexity of O(n), on the other hand, we get a linear relationship.

A factor of log2(n) should be preferred over a factor of n any time.
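
A quick sketch using JavaScript's Math.log2 reproduces the numbers above:

// Large jumps in n produce only small increases in log2(n).
for (const n of [1024, 1048576, 1073741824]) {
  console.log(n, Math.log2(n)); // logs 10, 20 and 30 respectively
}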

To further solidify this, I came across an example in Algorithms Unlocked by Thomas Cormen.

Consider 2 computers: A and B.

Both computers have the task of searching an array for a value. Let's assume the array has 10 million elements to search through.

Computer A - This computer can execute 1 billion instructions per second and is expected to perform the above task using an algorithm with a complexity of O(n). We can approximate the time it takes this computer to complete the task as

n / (instructions per second) → 10^7 / 10^9 = 0.01 seconds

Computer B - This computer is much slower and can execute only 10 million instructions per second. Computer B is expected to perform the above task using an algorithm with a complexity of O(log n). We can approximate the time it takes this computer to complete the task as

log(n) / (instructions per second) → log(10^7) / 10^7 = 0.000002325349 seconds

With this illustration, we can see that even though computer A is much faster than computer B, computer B completes the task much quicker because of the algorithm it uses.
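
That arithmetic can be reproduced with a short sketch (assuming, as above, a base-2 logarithm and the given instruction rates):

// Computer A: 10^9 instructions/second, linear O(n) algorithm.
// Computer B: 10^7 instructions/second, logarithmic O(log n) algorithm.
const n = 1e7; // 10 million elements

const timeA = n / 1e9;            // 0.01 seconds
const timeB = Math.log2(n) / 1e7; // roughly 0.0000023 seconds

console.log(timeA, timeB);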

I think it should be very clear now why O(log(n)) is much faster than O(n).

Aftertaste answered 11/2, 2019 at 10:58 Comment(0)

For an input of size n, an algorithm that is O(n) will perform a number of steps proportional to n, while another algorithm that is O(log(n)) will perform roughly log(n) steps.

Clearly log(n) is smaller than n, hence an algorithm of complexity O(log(n)) is better, since it will be much faster.

Biodynamics answered 29/4, 2012 at 4:8 Comment(0)

O(log n) means that the algorithm's maximum running time is proportional to the logarithm of the input size. O(n) means that the algorithm's maximum running time is proportional to the input size.

Basically, O(something) is an upper bound on the algorithm's number of (atomic) instructions. Therefore, O(log n) is a tighter bound than O(n) and is also better in terms of algorithm analysis. But all algorithms that are O(log n) are also O(n), not the other way around...

Harken answered 29/4, 2012 at 4:8 Comment(1)
"O(n) is tighter than O(logn) and is also better in terms of algorithms analysis"...clearly O(log(n)) is better than O(n). I think you meant the other way.Sciurine

http://en.wikipedia.org/wiki/Big_oh

O(log n) is better.

Healy answered 29/4, 2012 at 4:4 Comment(0)

Formal definition:

g(x) = O(f(x)) <=> there exist x0 and a constant C such that for every x > x0, |g(x)| <= C|f(x)|.

Thus, if you find an algorithm A for problem P whose complexity is O(f(n)), you can say that the number of steps your algorithm performs is asymptotically lower than or equal to f(n) (up to a constant factor), where n is usually the input size (n can be anything).
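
As a concrete (purely illustrative) instance of the definition, take g(x) = 3x + 5 and f(x) = x; the witnesses C = 4 and x0 = 5 work, since 3x + 5 <= 4x whenever x >= 5, and a quick sketch can spot-check this over a range of values:

// Spot-check g(x) = O(f(x)) for g(x) = 3x + 5, f(x) = x, with C = 4 and x0 = 5.
const g = (x) => 3 * x + 5;
const f = (x) => x;
const C = 4;
const x0 = 5;

let holds = true;
for (let x = x0 + 1; x <= 1000000; x *= 2) {
  if (Math.abs(g(x)) > C * Math.abs(f(x))) holds = false;
}
console.log(holds); // true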

For further reading: http://en.wikipedia.org/wiki/Big_O_notation

Terryl answered 5/7, 2012 at 15:13 Comment(0)

Is O(1) always faster than O(log n)?

O(1) means the running time of an algorithm is independent of the input size and is bounded by a constant 'c'. O(log n), on the other hand, means that when the input size 'n' increases exponentially, the running time increases only linearly.

Note that O(log n) might be faster than O(1) in some cases, but O(1) will outperform O(log n) as n grows, since O(1) is independent of the input size n. Consider these two code snippets:

Code 1:

function show() {
  // The loop always runs 4 times (i = 2..5), regardless of any input: O(1).
  for (let i = 2; i <= 5; i++) {
    console.log("Hello");
  }
}



Code 2:

function showN(n) {
  // i doubles on every iteration, so the loop runs about log2(n) times: O(log n).
  for (let i = 2; i <= n; i = i * 2) {
    console.log("Hello");
  }
}
  

The running time of Code 1 is O(1), as it's independent of the input size 'n', whereas the running time of Code 2 is O(log n).

Case where O(log n) outperforms O(1): let us assume, hypothetically, that each loop iteration takes 1 ms to execute.

So for n = 2, Code 1 will take 4 ms to execute whereas Code 2 will take just 1 ms. In this case, O(log n) outperformed O(1).

Case where O(1) outperforms O(log n): as the input size 'n' increases, O(1) will outperform O(log n). For example, suppose n = 2048; Code 1 will still take 4 ms, but Code 2 will now take 11 ms to execute. In this case, O(1) outperformed O(log n).
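
A short sketch that counts the loop iterations behind those numbers (the stepsCode1/stepsCode2 helpers are hypothetical, added only for illustration, and each iteration is assumed to cost 1 ms as above):

function stepsCode1() {
  // Code 1 always runs its loop 4 times (i = 2..5), no matter the input.
  let steps = 0;
  for (let i = 2; i <= 5; i++) steps++;
  return steps;
}

function stepsCode2(n) {
  // Code 2's counter doubles each pass, so it runs roughly log2(n) times.
  let steps = 0;
  for (let i = 2; i <= n; i = i * 2) steps++;
  return steps;
}

console.log(stepsCode1(), stepsCode2(2));    // 4 1  -> O(log n) wins for tiny n
console.log(stepsCode1(), stepsCode2(2048)); // 4 11 -> O(1) wins as n grows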

Conclusion: as we noticed in the above cases, an O(1) algorithm will not always run faster than an O(log n) one. Sometimes O(log n) will outperform O(1), but as the input size 'n' increases, O(log n) will take more time than O(1).

Pterous answered 29/8, 2023 at 8:5 Comment(0)
