What does O(log n) mean exactly?
Asked Answered
G

32

2734

I am learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials.

For example, the following function is O(n) because the algorithm grows in proportion to its input n:

f(int n) {
  int i;
  for (i = 0; i < n; ++i)
    printf("%d", i);
}

Similarly, if there were a nested loop, the time would be O(n^2).

But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)?

I do know (maybe not in great detail) what a logarithm is, in the sense that log_10(100) = 2, but I cannot understand how to identify a function with a logarithmic time.

Gingersnap answered 21/2, 2010 at 20:5 Comment(17)
A 1-node binary tree has height log2(1)+1 = 1, a 2-node tree has height log2(2)+1 = 2, a 4-node tree has height log2(4)+1 = 3, and so on. An n-node tree has height log2(n)+1, so adding nodes to the tree causes its average height to grow logarithmically.Apologetics
One thing I'm seeing in most answers is that they essentially describe "O(something)" means the running time of the algorithm grows in proportion to "something". Given that you asked for "exact meaning" of "O(log n)", it's not true. That's the intuitive description of Big-Theta notation, not Big-O. O(log n) intuitively means the running time grows at most proportional to "log n": #471699Peipeiffer
Related #487758Directly
I always remember divide and conquer as the example for O(log n)Whit
Trickiness about your function: It's linear with respect to the VALUE of the input, however - if we think of n as a vector of bits (how we represent the input) - it's actually O(2^n) in terms of the SIZE of the input (ie. it takes lg(n) bits to represent n, so if n is k bits long, the loop iterates up to 2^k times). So I would say that loop is exponential, not linear.Endocrine
It's important to realize that it's log base 2 (not base 10). This is because at each step in an algorithm, you remove half of your remaining choices. In computer science we almost always deal with log base 2 because we can ignore constants. However there are some exceptions (e.g. quadtree run times are log base 4)Toucan
@Ethan: It doesn't matter which base you are in, since base conversion is just a constant multiplication. The formula is log_b(x) = log_d(x) / log_d(b), and log_d(b) is just a constant.Endocrine
A few basic resources about logarithms: 1. Demystifying the Natural Logarithm 2. Using Logarithms in the Real WorldCatechol
@mdkess Yes I know it is a constant factor, I even said so in my comment. I was pointing out that logarithmic run times often involve dividing your choices in half as opposed to a log base 10 run time which would involve removing 9/10ths of your choices at each step. So I was pointing out to the OP that we often find ourselves dividing our choices in half at each step of some algorithm, which results in a log base 2 run time not log base 10. Realizing that is useful to being able to look at code and say "Oh that's O(logn)". I was not really saying anything about actual run time.Toucan
@Ethan: You're still incorrect. If something is O(log_b(n)), b > 1, it is O(log_c(n)) for any c > 1. So dividing your choices in half results in O(log_2(n)) time AND O(log_10(n)) time AND O(log_12838(n)) runtime. It doesn't matter. Logarithm base conversion is a constant multiplication, which is dominated by the logarithm function - ie. O(c*f(n)) is O(f(n)) if c is a constant.Endocrine
@mdkess I am not going for algorithmic rigor I am just trying to create a mental mapping between something in code and something being O(logn). Of course by now any attempt at that being happening has been destroyed. Great. And for the record I am not wrong because I said the run time was log base 10, I did not refer to big O notation. So if you want to be all formal then it would be T(log base 10 (n)). Now if you would like to stop arguing semantics that would be great.Toucan
@Ethan: Assuming by T you mean big-Theta, then you're right - but it's also T(log base 43 (n)) and T(log base 378 (n)), by definition.Endocrine
@mdkess I do not mean big Theta. T_n is the function that is the exact running time of code. For example: T_n = 5 + 8n + n^2. Big O of T_n is O(n^2)Toucan
Can someone tell me what the O stands for? Output?Sipes
@Sipes programmers.stackexchange.com/a/107977/8613Gingersnap
I recently watched a video which describes Log N perfectly. It is an absolute must watch. youtube.com/watch?v=kjDR1NBB9MUThymelaeaceous
Maybe it would be useful: svitlanamoiseyenko.com/2016/08/30/olog-n-and-how-fast-it-isKilocycle
C
3355

I cannot understand how to identify a function with a log time.

The most common attributes of a logarithmic running-time function are that:

  • the choice of the next element on which to perform some action is one of several possibilities, and
  • only one will need to be chosen.

or

  • the elements on which the action is performed are digits of n

This is why, for example, looking up people in a phone book is O(log n). You don't need to check every person in the phone book to find the right one; instead, you can simply divide and conquer by looking based on where their name falls alphabetically, and at each step you only need to explore a smaller and smaller section before you eventually find the person's phone number.

Of course, a bigger phone book will still take you a longer time, but the time won't grow in proportion to the increase in size.
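
A rough sketch of that lookup in C, assuming the phone book is modeled as a sorted array of name strings:

    #include <string.h>

    /* Returns the index of name in the sorted array book[0..n-1], or -1.
       Each pass halves the remaining section, so it takes O(log n) steps. */
    int find_name(const char *book[], int n, const char *name) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            int cmp = strcmp(name, book[mid]);
            if (cmp == 0) return mid;       /* found the entry */
            if (cmp < 0)  hi = mid - 1;     /* name sorts earlier: keep left half */
            else          lo = mid + 1;     /* name sorts later: keep right half */
        }
        return -1;                          /* not in the book */
    }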


We can expand the phone book example to compare other kinds of operations and their running time. We will assume our phone book has businesses (the "Yellow Pages") which have unique names and people (the "White Pages") which may not have unique names. A phone number is assigned to at most one person or business. We will also assume that it takes constant time to flip to a specific page.

Here are the running times of some operations we might perform on the phone book, from fastest to slowest:

  • O(1) (in the worst case): Given the page that a business's name is on and the business name, find the phone number.

  • O(1) (in the average case): Given the page that a person's name is on and their name, find the phone number.

  • O(log n): Given a person's name, find the phone number by picking a random point about halfway through the part of the book you haven't searched yet, then checking to see whether the person's name is at that point. Then repeat the process about halfway through the part of the book where the person's name lies. (This is a binary search for a person's name.)

  • O(n): Find all people whose phone numbers contain the digit "5".

  • O(n): Given a phone number, find the person or business with that number.

  • O(n log n): There was a mix-up at the printer's office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it's correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book.

For the below examples, we're now at the printer's office. Phone books are waiting to be mailed to each resident or business, and there's a sticker on each phone book identifying where it should be mailed to. Every person or business gets one phone book.

  • O(n log n): We want to personalize the phone book, so we're going to find each person or business's name in their designated copy, then circle their name in the book and write a short thank-you note for their patronage.

  • O(n^2): A mistake occurred at the office, and every entry in each of the phone books has an extra "0" at the end of the phone number. Take some white-out and remove each zero.

  • O(n · n!): We're ready to load the phonebooks onto the shipping dock. Unfortunately, the robot that was supposed to load the books has gone haywire: it's putting the books onto the truck in a random order! Even worse, it loads all the books onto the truck, then checks to see if they're in the right order, and if not, it unloads them and starts over. (This is the dreaded bogo sort.)

  • O(n^n): You fix the robot so that it's loading things correctly. The next day, one of your co-workers plays a prank on you and wires the loading dock robot to the automated printing systems. Every time the robot goes to load an original book, the factory printer makes a duplicate run of all the phonebooks! Fortunately, the robot's bug-detection systems are sophisticated enough that the robot doesn't try printing even more copies when it encounters a duplicate book for loading, but it still has to load every original and duplicate book that's been printed.

Cataclysmic answered 21/2, 2010 at 20:14 Comment(37)
@David Thornley » Ah, gotcha. I thought I'd made that obvious by saying "each resident or business" was getting a phonebook, but I'll edit to highlight this.Cataclysmic
@cletus: Coincidental, I'm afraid. I picked it because phonebooks have a large N, people understand what they are and what they do, and because it's versatile as an example. Plus I got to use robots in my explanation! A win all-around. (Also, it looks like your answer was made before I was even a member on StackOverflow to begin with!)Cataclysmic
@John: no worries. Was just curious. :)Directly
"A mistake occurred at the office, and every entry in each of the phone books has an extra "0" at the end of the phone number. Take some white-out and remove each zero." <-- this is not order N squared. N is defined as the size of the input. The size of the input is the number of phone numbers, which is the number of numbers per book times the number of books. That's still a linear time operation.Alexandraalexandre
@Billy: In this example, N is the number of people in a single book. Because every person in the phone book also gets their own copy of the book, there are N identical phone books, each with N people in it, which is O(N^2).Cataclysmic
@John Feminella: N, with regards to Big Oh, is always the size of the input set. If you define N to be something else, that's fine, but you did not do so above....Alexandraalexandre
The bogo sort is the most stupid algorithm I have ever seen in my life. Basically while (!isInOrder(deck)) { shuffle(deck); }Mosley
It's a stupid algorithm, but it's not the stupidest algorithm you could possibly write. :)Cataclysmic
@JohnFeminella I see where you’re going with the shuffled-pages example, but the description of the fix sounds like insertion sort rather than, say, mergesort or quicksort.Rockabilly
Actually, the algorithm people use for looking up names in phone books isn't binary search. It's interpolation search, which is O(log log n) on average!Pinsk
Isn't O(1) the best case, rather than worst case as it is strangely highlighted as?Unveil
Isn't "take some white-out and remove each zero" O(n·m), where n is the number of entries, and m is the number of phonebooks printed?Voronezh
@BenHodgson You are generally correct. However, in this case, since the list of customers receiving copies of the phonebook is defined as every person or business in the phonebook, n==m.Ferret
@Unveil I believe the reason it was highlighted as worst case is that there is a naïve constant time process to find a given entry on a page that involves scanning every entry in order until the target entry is found. If there are 50 entries on a page, the worst case is 50 operations, which is O(1). You could potentially do better, i.e. if the target is 5th on the page. In this case, the time to do "better" is variable and unknown, since there are 49 "better" cases, but the worst case is always a constant number of operations.Ferret
@KenB I thought O(1) meant constant time, i.e. the amount of operations is regardless of input. 50 entries taking 50 operations would be O(n). en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functionsUnveil
@Unveil O(n) implies variability. 50 operations is a fixed number if you are only considering the worst case, since the worst case is always exactly 50 entries. Honestly, it's getting into semantics a little.Ferret
Great illustration of big O notation! One comment on the O(1) case: You have a disclaimer saying that flipping to a given page number is constant time, however I think it's more like log(n). A better example for O(1) would be to pick a random phone number by flipping open a random page in the phone book and choosing a random number on that page. This has constant running time, no matter how many entries there are in the phone book, and no disclaimers required.Larisa
Quick question for the case: "There was a mix-up at the printer's office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it's correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book." Shouldn't the complexity it should be O(n!) rather than O(n log n)Desirous
@Bagira: No; think of a deck of cards. There are n! ways to shuffle the cards, but it doesn't take n! time to sort them.Cataclysmic
Sorry, but i could understand with cards example as well. My concern is, with Random ordering, you can't decide the subset which you need to visit. Since items are not ordered, in worst case you can move in wrong direction and will have to traverse other subset as well and every time size of list is reduced by one that's why i said n!. If you remove the word "Random" from sentence then it makes sense to me. ThanksDesirous
It is confusing that you say, this list is in order from "best to worst" but then start the list with "worst case"Relive
@Adam: "O(1) worst case" means that the worst it could possibly be is O(1), which is the best of all possible worlds. If you said "the worst test grade in the class was 100%", that is a very good thing, not a very bad thing!Cataclysmic
what will be the time complexity of this function log(n!)Thompson
@Thompson O(log n!) is equivalent to O(n log n).Cataclysmic
How incredible is it that this answer has been accepted to the original question? This answer does not explain in my opinion what O(log n) time really is: you only give a lot of examples (also to things that have almost nothing to do with the question) without explaining exactly (always) why the complexity of those operations are really correct. Incredible!Luehrmann
@iAteABug_And_iLiked_it I have provided a very simple answer for understanding O(log n)Sergio
Why does removing the last digit of each number have a complexity of O(n2)? I assumed it would be O(n).Engadine
Guys, I found solution for this Link is better to the question asked: #9153390Pitta
The O(n*n!) algorithm is actually unbounded - randomised bogosort is unbounded (deterministic bogosort is not) - there's no guarantee that the robot will ever finish.Orchestral
The O(n^n) algorithm should more specifically be, I think, O(n^2*2^n) (because it's n*n+2n*n+4n*n+...+n*n*2^n, if I'm understanding it correctly, that is: we're printing n books of size n at the first step, and then reprinting the originals and the reprints). It could also use a brief explanation.Orchestral
I'll just leave comment to note that binary search is not a divide and conquer algorithm.Chroma
A guy named John came in and nailed every part of the explanation. You rock, thank you.Banff
what does "the elements on which the action is performed are digits of n" meanBedeck
O(n · n!): We're ready to load the ... It seems that needs to be O(n*(n-1)!)Panther
@Bedeck - check out moonshadow's answer in this post and you'll understand it. Basically it means that if you have a number, e.g. 12,000, you won't perform an operation 12,000 times, but rather 5 times (because it has 5 digits). Incrementing the number will also increase the complexity, but only proportionally: if you increase the number to 112,000, the complexity grows only by 1 (because now you have 6 digits), not by the actual increment, which is 100,000.Venose
Is looking for a name in phonebook log to the base 26(n)????? since there are 26 choices at each point.Neilson
a better example for N^2 would be that you are assigned to write everyone's phone number in a new phone book. You write down the numbers as 9-digit numbers; as you're leaving, your boss tells you he wants a 1 in front of each number in case there is international expansion in the future. You will have to go through every page to create these 10-digit numbers.Toomer
M
982

O(log N) basically means time goes up linearly while n goes up exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1,000 elements, and so on.

It is O(log n) when we do divide-and-conquer types of algorithms, e.g. binary search. Another example is quicksort, where each time we divide the array into two parts, and each time it takes O(N) time to find the pivot element. Hence it is N times O(log N), i.e. O(N log N).

Mcafee answered 21/2, 2010 at 20:18 Comment(10)
Three lines of wisdom that beats all other essay answers... :) Just in case somebody is missing it, in programming context, the base of log is 2 (not 10), so O(log n) scales like 1 sec for 10 elements, 2 sec for 20, 3 for 40 etc.Aeolipile
Agreed, concise and clear, although the ending question from the OP was how to identify a logarithmic function, not quite "what is it"Relive
yes, logarithmic function is it's inverse to exponential function. ((log x) base a) is inverse of (a power x). Qualitative analysis of these functions with graphs would give more intuition.Chamberlain
This took me about 3 read-throughs to realize it wasn't wrong. Time goes up linearly, while the number of elements is exponential. This means more elements during less time. This is mentally taxing for those that visualize log as the familiar log curve on a graph.Eventual
I think this is a very good answer, except for the part where it claims that binary search is a divide and conquer algorithm. It isn't.Chroma
@ray Why is it not binary search is a divide and conquer algorithm?Canty
@Canty The link in my prev. comment goes to this answer, where I explain why it isn't. If you have further questions, please leave comment(s) under my answer within that particular post.Chroma
@Aeolipile That's not quite right. When you use the notation O(log n), the base of the log is irrelevant because all log functions are proportional to each other. That is why you omit the base, which is not the case with exponentials (e.g. O(2^n) is not the same as O(10^n)). So even in the context of base-2, the scaling behaviour you described is not generally correct - it depends on the constant factor.Keilakeily
@Mcafee when you said divide and conquer are we assuming that x no of different thread will process the chunks?Cyclostome
Quick Sort's worst-case is O(n^2), while best-Case is Ω(nlogn).Servia
D
659

Many good answers have already been posted to this question, but I believe we really are missing an important one - namely, the illustrated answer.

What does it mean to say that the height of a complete binary tree is O(log n)?

The following drawing depicts a binary tree. Notice how each level contains double the number of nodes compared to the level above (hence binary):

[image: binary tree]

Binary search is an example with complexity O(log n). Let's say that the nodes in the bottom level of the tree in figure 1 represent items in some sorted collection. Binary search is a divide-and-conquer algorithm, and the drawing shows how we will need (at most) 4 comparisons to find the record we are searching for in this 16-item dataset.

Assume instead we had a dataset with 32 elements. Continue the drawing above to find that we will now need 5 comparisons to find what we are searching for, as the tree has only grown one level deeper when we doubled the amount of data. As a result, the complexity of the algorithm can be described as logarithmic.
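
To see the numbers concretely, here is a small C sketch (an illustration, not part of the original drawing) that counts how many times a dataset of size n can be halved, i.e. the number of comparisons the binary search needs:

    /* halvings(16) == 4, halvings(32) == 5, halvings(64) == 6, ... */
    int halvings(int n) {
        int steps = 0;
        while (n > 1) {
            n /= 2;       /* one comparison discards half the data */
            ++steps;
        }
        return steps;
    }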

Plotting log(n) on a plain piece of paper will result in a graph where the rise of the curve decelerates as n increases:

[image: graph of O(log n)]

Disintegration answered 21/2, 2010 at 22:22 Comment(7)
"Notice how each level contains the double number of nodes compared to the level above (hence binary)" This is incorrect. What you're describing is a balanced binary tree. A binary tree just means each node has at most two children.Christal
In fact, it's a very special balanced binary tree, called a complete binary tree. I've edited the answer but need someone to approve it.Lindstrom
A complete binary tree doesn't need to have the last level to be completely filled. I would say, a 'full binary tree' is more appropriate.Pitarys
Your answer tries to respond more concretely to the original problem of the OP, so it is better than the current accepted answer (IMO), but it is still very incomplete: you just give a half example and 2 images...Luehrmann
This tree has 31 items in it, not 16. Why is it called a 16 item data set? Every node on it represents a number, otherwise it would be an inefficient binary tree :PBotulin
I remember working with b-trees in my college programming class as a means to implement the bubble/quick sort. I was blown away by the brilliance and elegance of the solution. I have yet to find a faster sort algorithm,Priester
"Binary search is a divide-and-conquer algorithm.." That's not so simple. I would argue it is not true. But look here and see what you think for yourself: #8850947Waken
S
352

Overview

Others have given good diagram examples, such as the tree diagrams. I did not see any simple code examples. So in addition to my explanation, I'll provide some algorithms with simple print statements to illustrate the complexity of different algorithm categories.

First, you'll want to have a general idea of logarithms, which you can get from https://en.wikipedia.org/wiki/Logarithm . The natural sciences use e and the natural log. Engineering disciplines use log_10 (log base 10), and computer scientists use log_2 (log base 2) a lot, since computers are binary-based. You'll sometimes see the natural log abbreviated as ln(); engineers normally leave the _10 off and just write log(), and log_2 is abbreviated as lg(). All of these types of logarithms grow in a similar fashion, which is why they share the same category, log(n).
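
Since the bases differ only by a constant factor, converting between them is a single division. A small C sketch (C99, link with -lm) demonstrating this:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n = 1024.0;
        printf("ln(n)  = %f\n", log(n));             /* base e  */
        printf("log(n) = %f\n", log10(n));           /* base 10 */
        printf("lg(n)  = %f\n", log2(n));            /* base 2: prints 10.000000 */
        printf("lg(n) via change of base = %f\n", log(n) / log(2.0));
        return 0;
    }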

When you look at the code examples below, I recommend looking at O(1), then O(n), then O(n^2). After you are good with those, then look at the others. I've included clean examples as well as variations to demonstrate how subtle changes can still result in the same categorization.

You can think of O(1), O(n), O(log n), etc. as classes or categories of growth. Some categories will take more time than others; these categories give us a way of ordering algorithm performance. Some grow faster than others as the input n grows. The following table demonstrates this growth numerically. In the table below, think of log(n) as the ceiling of log_2(n).

[image: table showing the numeric growth of log(n), n, n log(n), n^2, etc. as n increases]

Simple Code Examples Of Various Big O Categories:

O(1) - Constant Time Examples:

  • Algorithm 1:

Algorithm 1 prints hello once and it doesn't depend on n, so it will always run in constant time; it is O(1).

print "hello";
  • Algorithm 2:

Algorithm 2 prints hello 3 times, but it does not depend on an input size. Even as n grows, this algorithm will always only print hello 3 times. That said, 3 is a constant, so this algorithm is also O(1).

print "hello";
print "hello";
print "hello";

O(log(n)) - Logarithmic Examples:

  • Algorithm 3 - This acts like "log_2"

Algorithm 3 demonstrates an algorithm that runs in log_2(n). Notice the post operation of the for loop multiplies the current value of i by 2, so i goes from 1 to 2 to 4 to 8 to 16 to 32 ...

for(int i = 1; i <= n; i = i * 2)
  print "hello";
  • Algorithm 4 - This acts like "log_3"

Algorithm 4 demonstrates log_3. Notice i goes from 1 to 3 to 9 to 27...

for(int i = 1; i <= n; i = i * 3)
  print "hello";
  • Algorithm 5 - This acts like "log_1.02"

Algorithm 5 is important, as it helps show that as long as the number is greater than 1 and the result is repeatedly multiplied against itself, you are looking at a logarithmic algorithm.

for(double i = 1; i < n; i = i * 1.02)
  print "hello";

O(n) - Linear Time Examples:

  • Algorithm 6

This algorithm is simple: it prints hello n times.

for(int i = 0; i < n; i++)
  print "hello";
  • Algorithm 7

This algorithm shows a variation that prints hello n/2 times. n/2 = 1/2 * n; we ignore the 1/2 constant and see that this algorithm is O(n).

for(int i = 0; i < n; i = i + 2)
  print "hello";

O(n*log(n)) - nlog(n) Examples:

  • Algorithm 8

Think of this as a combination of O(log(n)) and O(n). The nesting of the for loops helps us obtain the O(n*log(n)).

for(int i = 0; i < n; i++)
  for(int j = 1; j < n; j = j * 2)
    print "hello";
  • Algorithm 9

Algorithm 9 is like algorithm 8, but each of the loops allows variations, which still results in O(n*log(n)).

for(int i = 0; i < n; i = i + 2)
  for(int j = 1; j < n; j = j * 3)
    print "hello";

O(n^2) - n squared Examples:

  • Algorithm 10

O(n^2) is obtained easily by nesting standard for loops.

for(int i = 0; i < n; i++)
  for(int j = 0; j < n; j++)
    print "hello";
  • Algorithm 11

Like algorithm 10, but with some variations.

for(int i = 0; i < n; i++)
  for(int j = 0; j < n; j = j + 2)
    print "hello";

O(n^3) - n cubed Examples:

  • Algorithm 12

This is like algorithm 10, but with 3 loops instead of 2.

for(int i = 0; i < n; i++)
  for(int j = 0; j < n; j++)
    for(int k = 0; k < n; k++)
      print "hello";
  • Algorithm 13

Like algorithm 12, but with some variations that still yield O(n^3).

for(int i = 0; i < n; i++)
  for(int j = 0; j < n + 5; j = j + 2)
    for(int k = 0; k < n; k = k + 3)
      print "hello";

Summary

The above gives several straightforward examples and variations to help demonstrate what subtle changes can be introduced that really don't change the analysis. Hopefully it gives you enough insight.

Sweetmeat answered 26/4, 2016 at 22:50 Comment(12)
Awesome. The best explanation for me I ever seen. It'd be nicer if O(n^2) is noted as a combination of O(n) and O(n), so O(n) * O(n) = O(n * n) = O(n^2). It feels like a bit of jumping without this equation. This is repetition of prior explanation, but I think this repetition can provide more confidence to readers for understanding.Sanhedrin
This is simply the best ever explanation.Scarper
Amazing examples! Can I ask why constants are disregarded in big O notations? Eg. why O(n/2) is simply taken as O(n)? Is it because when n --> infinity, the constant becomes insignificant as well?Goosefish
@Goosefish - The "O" in big O notation stands for "Order of Magnitude". In other words, at what rate does the time increase as the number of items increases. Halving n doesn't change the rate at which n is increasing. hthDneprodzerzhinsk
@IceTea, to give you insight/intuition to your question. If you chart out n versus n/2 you'll see that they both make a straight line. This puts them in the same class as they have similar growth rates (think of it as the shape of the charting). Similarly, if you charted out log_2 versus log_3 you'll see that they both take on "similar shapes" or "similar growth rates".Sweetmeat
Thanks for the feedback, but at least based on what I've learnt so far, the correct answer to my previous question is that big O time complexities represent asymptotic time complexities, so is indeed that when n --> infinity, the constant becomes insignificant.Goosefish
@IceTea, the explanation given by @Shai and @James is more accurate: n/2 or 2n or n+2 or n will have different lines in the graph, but they will have the same growth rate, which means all of them follow a linear growth.Ephesian
New here. Can anybody explain to me as why Algorithm 3 is different than Algorithm 7? They all run less than n times.Pencil
How about the case where we have two nested loops, but the second iterator depends on the first one, does this dependency affect the time complexity?Permeable
@Permeable can you provide an example? e.g. is the following like what you are referring too: for(i=0; i<n; i++){ for(j=i; j<n; j++) { print "foo";}} if so, then yes, in this case example it would be the equivalent of a summation from 1 to n prints (think of the triangle shape it makes if you print each value line by line), which equals (n*(n+1))/2 which is O(n^2).Sweetmeat
@SuShiS, Alg_3 is increasing by multiplication where Alg_7 is increasing by addition. For small numbers this may not be as obvious, but look at it when you have a value that is larger (say a septillion, 10^24) it starts to help things become more clear. Alg_7 is only half the value of n, where Alg_3 would be more of the log of n. The larger the value for n you pick, the clearer it becomes. :)Sweetmeat
This answer has been the most useful for me, could you explain why the variations like for(int j = 0; j < n; j = j + 2) are still the same as whatever they follow? In the example I used why is this still O(n^2) if the loop counter increment means there will be less iterations?Deutsch
S
298

The explanation below uses the case of a fully balanced binary tree to help you understand how we get logarithmic time complexity.

A binary tree is a case where a problem of size n is divided into sub-problems of size n/2 until we reach a problem of size 1:

[image: height of a binary tree]

And that's how you get O(log n), which is the amount of work that needs to be done on the above tree to reach a solution.

A common algorithm with O(log n) time complexity is binary search, whose recurrence relation is T(n) = T(n/2) + O(1), i.e. at every subsequent level of the tree you divide the problem in half and do a constant amount of additional work.
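
A recursive binary search in C matching that recurrence (a sketch; the array is assumed sorted):

    /* T(n) = T(n/2) + O(1): one comparison, then recurse into one half */
    int bsearch_rec(const int a[], int lo, int hi, int key) {
        if (lo > hi) return -1;            /* size-1 subproblems bottom out here */
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;     /* the O(1) work at this level */
        if (key < a[mid])
            return bsearch_rec(a, lo, mid - 1, key);
        return bsearch_rec(a, mid + 1, hi, key);
    }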

Sergio answered 26/10, 2012 at 19:33 Comment(2)
newbie here. So could you say the tree height is the division rate by recursion to reach size n=1?Mika
@Cody, yes for the most part your observation is accurate. This example illustrates/utilizes log_2. Your observation would extend beyond log_2 and would be accurate for any log_x where x > 1. Doing straight division may not lead to 1 exactly though, so you may want to say the recursive division until the Ceiling() of the latest division is equal to 1, or something similar.Sweetmeat
R
184

If you had a function that takes:

1 millisecond to complete if you have 2 elements.
2 milliseconds to complete if you have 4 elements.
3 milliseconds to complete if you have 8 elements.
4 milliseconds to complete if you have 16 elements.
...
n milliseconds to complete if you have 2^n elements.

Then it takes log2(n) time. The Big O notation, loosely speaking, means that the relationship only needs to be true for large n, and that constant factors and smaller terms can be ignored.
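
If you want to check that relationship mechanically, here is a small C sketch (an illustration, not from the original answer) that prints the elements-to-milliseconds pairs from the list above:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* elements -> time (ms) per the example above: time = log2(elements) */
        for (int k = 1; k <= 5; k++) {
            double elements = pow(2.0, k);
            printf("%6.0f elements -> %2.0f ms\n", elements, log2(elements));
        }
        return 0;
    }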

Rubenstein answered 21/2, 2010 at 20:11 Comment(4)
is log2(n) the same as o(log n) ?Tony
Yes, see comment by nawfal for another answer here: (copy-pasting) - in programming context, the base of log is 2 (not 10), so O(log n) scales like 1 sec for 10 elements, 2 sec for 20, 3 for 40 etcPretoria
@SvenvandenBoogaart, the example in this solution illustrates log_2, which is in the class O(log(n)). There are many others in the same class of O(log(n)) i.e. log_x where x > 1Sweetmeat
@Andrejs, your comment so O(log n) scales like 1 sec for 10 elements, 2 sec for 20, 3 for 40 etc is inaccurate. That pattern/class would match/align with O(n) not O(log(n)). If someone was interested in log_10 then an equivalent example would be 1 sec for 10 elements, 2 seconds for 100, 3 for 1000, etc.Sweetmeat
R
152

The logarithm

Ok let's try and fully understand what a logarithm actually is.

Imagine we have a rope and we have tied it to a horse. If the rope is directly tied to the horse, the force the horse would need to pull away (say, from a man) is directly 1.

Now imagine the rope is looped round a pole. To get away, the horse will now have to pull many times harder. How many times harder will depend on the roughness of the rope and the size of the pole, but let's assume it will multiply one's strength by 10 (when the rope makes a complete turn).

Now if the rope is looped once, the horse will need to pull 10 times harder. If the human decides to make it really difficult for the horse, he may loop the rope again round the pole, increasing its strength by an additional 10 times. A third loop will again increase the strength by a further 10 times.

[image: a rope looped several times around a pole]

We can see that for each loop, the value increases by a factor of 10. The number of turns required to get any number is called the logarithm of the number, i.e. we need 3 loops to multiply our strength by 1,000, and 6 loops to multiply our strength by 1,000,000.

3 is the logarithm of 1,000, and 6 is the logarithm of 1,000,000 (base 10).

So what does O(log n) actually mean?

In our example above, our 'growth rate' is O(log n). For every additional loop, the force our rope can handle is 10 times more:

Turns | Max Force
  0   |   1
  1   |   10
  2   |   100
  3   |   1000
  4   |   10000
  n   |   10^n

Now the example above did use base 10, but fortunately the base of the log is insignificant when we talk about big o notation.

Now let's imagine you are trying to guess a number between 1-100.

Your Friend: Guess my number between 1-100! 
Your Guess: 50
Your Friend: Lower!
Your Guess: 25
Your Friend: Lower!
Your Guess: 13
Your Friend: Higher!
Your Guess: 19
Your Friend: Higher!
Your Guess: 22
Your Friend: Lower!
Your Guess: 20
Your Friend: Higher!
Your Guess: 21
Your Friend: YOU GOT IT!  

Now it took you 7 guesses to get this right. But what is the relationship here? What is the most amount of items that you can guess from each additional guess?

Guesses | Items
  1     |   2
  2     |   4
  3     |   8
  4     |   16
  5     |   32
  6     |   64
  7     |   128
  10    |   1024

Using the graph, we can see that if we use a binary search to guess a number between 1-100, it will take us at most 7 attempts. If we had 128 numbers, we could also guess the number in 7 attempts, but 129 numbers will take us at most 8 attempts (in relation to logarithms, here we would need 7 guesses for a 128-value range and 10 guesses for a 1024-value range; 7 is the logarithm of 128, and 10 is the logarithm of 1024, base 2).

Notice that I have bolded 'at most'. Big-O notation always refers to the worst case. If you're lucky, you could guess the number in one attempt, and so the best case is O(1), but that's another story.

We can see that for every guess our data set is shrinking. A good rule of thumb to identify whether an algorithm has a logarithmic time is to see if the data set shrinks by a certain factor after each iteration.

What about O(n log n)?

You will eventually come across a linearithmic time O(n log(n)) algorithm. The rule of thumb above applies again, but this time the logarithmic function has to run n times, e.g. reducing the size of a list n times, which occurs in algorithms like mergesort.

You can easily identify whether the algorithmic time is n log n. Look for an outer loop which iterates through a list (O(n)). Then look to see if there is an inner loop. If the inner loop is cutting/reducing the data set on each iteration, that loop is O(log n), and so the overall algorithm is O(n log n), as in the sketch below.
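
In code, that shape might look like the following C sketch, where do_work() stands in for any constant-time operation:

    for (int i = 0; i < n; i++)           /* outer loop over the list: O(n) */
        for (int j = n; j > 1; j /= 2)    /* inner loop halves the data set: O(log n) */
            do_work();                    /* constant-time body: total O(n log n) */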

Disclaimer: The rope-logarithm example was taken from the excellent book Mathematician's Delight by W. W. Sawyer.

Ronrona answered 8/10, 2016 at 14:27 Comment(1)
No. In our example above, our 'growth rate' is O(log n). For every additional loop, the force our rope can handle is 10 times more, supported by a chart which shows n==number of loops and our 'growth rate' => 10^n, which is NOT log n. The example can be made correct by making n=# horses, which requires log n loops to restrain. Poor pedogogical examples produce students who only believe they understand.Traceable
H
114

Logarithmic running time (O(log n)) essentially means that the running time grows in proportion to the logarithm of the input size - as an example, if 10 items take at most some amount of time x, 100 items take at most, say, 2x, and 10,000 items take at most 4x, then it's looking like an O(log n) time complexity.

Hoodmanblind answered 21/2, 2010 at 20:10 Comment(9)
+1, but you really should point out that it's log2, not log10.Anchoress
log2 or log10 is irrelevant. They only differ by a scale factor, which makes them of the same order, i.e. they still grow at the same rate.Demonize
The fun thing about logarithms is that when comparing relative heights, the exact base you use doesn't matter. log 10,000 / log 100 is 2 regardless of what base you use.Hoodmanblind
To be nitpicky, O(lg n) means that the runtime is at most proportional to lg n. What you describe is Theta(lg n).Vegetal
@rgrig: That is true. I've edited in a few "at mosts" to indicate the upper-bound nature of big-O.Hoodmanblind
@rgrig he described both O and theta: Theta(lg n) implies O(lg n)Rodgers
@klochner: "what is an even number?" "1+1 is an even number" "you described the number 2" "no, he described both even numbers and the number 2 because being 2 implies being even". What's your point?Vegetal
@rgrig - I read your comment as applying to the example rather than the definition. If we're nitpicking, he defined Theta (loosely), and described a function that is both O and ThetaRodgers
@klochner: You win the nitpicking game :) I should have said "define".Vegetal
B
71

First I recommend you to read following book;

Algorithms (4th Edition)

Here are some functions and their expected complexities; the numbers indicate statement execution frequencies.

[image: table of functions and their expected complexities]

The following Big-O complexity chart is also taken from bigocheatsheet: [image: Big-O complexity chart]

Lastly, here is a very simple showcase of how it is calculated:

[image: anatomy of a program's statement execution frequencies]

[image: analyzing the running time of a program (example)]

Brunelle answered 2/7, 2017 at 20:35 Comment(11)
I wouldn't put O(n log n) in the bad basket. It belongs to the fair one.Acinus
When viewing the big-O complexity chart (above) you have to remember O(n) is the actual linear line, not the pink/orange border. @Andre That's why O(n log n) is correctly marked in the 'bad' performance bracket; it is worse performance than linear.Praxiteles
@Praxiteles correct, while the performance of O(n log n) is technically worse than O(n), refer to the table above, which presents a good comparison of them (see the growth of the two). otoh the chart, from a different source, is contradictory because it puts O(1) and O(log n) in the same good/excellent. their relative order of growth is comparable to O(n) and O(n log n). tl;dr; O(n log n) is not excellent, but is far from bad.Acinus
I agree with @AndréWerlang here. For example Quicksort is considered as a pretty efficient sorting algorithm, and it's average complexity is O(n log n), and it definitely does not deserved to be put in bad basket.Brunelle
This answer is wrong! It assumes that N = N * N. In fact N = N! Your example is actually N cubed. You do the same in your graph. Your O(n) should actually be the divide between horrible and bad. Mathematical proof: You say that for loop is constant with O(1). That's what the 1 really means, not dependent on N. It just means not variable. But it is variable as it's dependent on N. Twice N and half the time. Therefore it's invalid. If it's from that book, don't buy it! The code graphic you've shown isn't real, it's a joke, look, "Theesome", it means three people having sex at once! OMGConstitutionality
Shouldn't O(n) be on the diagonal?Transatlantic
@Bedeck what r you talking about?Brunelle
see what @Constitutionality said (i also agree)Bedeck
What's wrong with the last picture? It is pretty absolute, it shows execution frequencies. Graph is another discussion point, I took it as a rough reference. First pic, just shows single math over different algo executions. This answer is not given as an academic reference.Brunelle
Please check this => algs4.cs.princeton.edu/14analysis/ThreeSum.java.html this example clearly says "A program with cubic running time." The example I gave for just a showcase for calculation.Brunelle
If you make the arrows sensible it'll make it look less like the first loop is log(1) at a glance. Use a plotter instead of the chart shown there. The chart means very little. It's a distortion liable to mislead. n log n still falls into a category of bad for scaling as it gets disproportionately worse the more there is to process. O(n) means the work is proportionate to the amount of items. A subjective scale should come with some explanation. You have a table trying to show exact values (but n log n is n log 10) then you jump to an opinionated chart that's not that table.Constitutionality
D
63

You can think of O(log N) intuitively by saying the time is proportional to the number of digits in N.

If an operation performs constant time work on each digit or bit of an input, the whole operation will take time proportional to the number of digits or bits in the input, not the magnitude of the input; thus, O(log N) rather than O(N).
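
For instance, summing the decimal digits of n does constant work per digit, so it runs in O(log n); a small C sketch:

    /* one loop iteration per decimal digit of n: O(log n) */
    int digit_sum(int n) {
        int sum = 0;
        while (n > 0) {
            sum += n % 10;   /* take the last digit */
            n /= 10;         /* drop it */
        }
        return sum;
    }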

If an operation makes a series of constant time decisions each of which halves (reduces by a factor of 3, 4, 5..) the size of the input to be considered, the whole will take time proportional to log base 2 (base 3, base 4, base 5...) of the size N of the input, rather than being O(N).

And so on.

Diplomacy answered 21/2, 2010 at 20:13 Comment(4)
Accurate enough and more easily grasped than most explanations, I reckon.Overtire
it's an explanation of log<sub>10</sub> N, is it?Panchito
@LiuYan刘研 they didn't say what base the number of digits was in. In any case though, log₂(n) = log₁₀(n)/log₁₀(2) and 1/log₁₀(2) is hence a constant multiplier, with the same principle applying to all other bases. This shows two things. Firstly that moonshadow's principle applies whatever the base (though the lower the base, the fewer "jags" in the estimate) and also that O(log n) is O(log n) no matter what base the calculation that led you to that conclusion.Anodic
"proportional" ... "each of which halves the size of the input" ??????Bedeck
P
61

What's log_b(n)?

It is the number of times you can cut a log of length n repeatedly into b equal parts before reaching a section of size 1.

Pellerin answered 19/3, 2010 at 19:28 Comment(1)
Excellent Comment! It's concise and exactly the answer I am after.Ominous
I
56

The best way I've always had to mentally visualize an algorithm that runs in O(log n) is as follows:

If you increase the problem size by a multiplicative amount (e.g. multiply its size by 10), the work is only increased by an additive amount.

Applying this to your binary tree question so you have a good application: if you double the number of nodes in a binary tree, the height only increases by 1 (an additive amount). If you double it again, it still only increased by 1. (Obviously I'm assuming it stays balanced and such). That way, instead of doubling your work when the problem size is multiplied, you're only doing very slightly more work. That's why O(log n) algorithms are awesome.

Inhibition answered 22/2, 2010 at 19:51 Comment(1)
Since 2^x is the inverse of log, then would the following be true? If you increase the problem size by [an additive amount], then the work is increased by a multiplicative amount.Mccloud
C
25

Divide and conquer algorithms usually have a log n component to their running time. This comes from the repeated halving of the input.

In the case of binary search, every iteration you throw away half of the input. It should be noted that in Big-O notation, log is log base 2.

Edit: As noted, the log base doesn't matter, but when deriving the Big-O performance of an algorithm, the log factor will come from halving, hence why I think of it as base 2.

Chokedamp answered 21/2, 2010 at 20:11 Comment(6)
Why is it log base 2? In randomized quicksort for example, I don't think it is base 2. As far as i know, the base doesn't matter, as log base a (n) = log2 (n) / log2 (a), so every logarithm is different from another by a constant, and constants are ignored in big-o notation. In fact, writing the base of a log in big-o notation is a mistake in my opinion, as you are writing a constant.Torritorricelli
Re "log is log base 2": https://mcmap.net/q/40860/-is-big-o-logn-log-base-e/…Kaseykasha
Very true that it can be converted to any base and it does not matter, but if you are trying to derive the Big-O performance and you see constant halving, it helps to understand that you wont see log base 10 reflected in the code.Chokedamp
An aside: In things such as B-trees, where nodes have a fan-out of more than 2 (i.e. "wider" than a binary tree), you'll still see O(logn) growth, because it's still divide-and-conquer, but the base of the log will be related to the fan-out.Montespan
The digression on log 2 was quite helpful, actually.School
If we use lg for base 2 and log for base 10. Then lg N = log N / log 2. Notice that 1/log2 is a constant. Therefore O(log N) = O(lg N)Photooffset
E
17

O(log n) is a bit misleading; more precisely it's O(log_2 n), i.e. logarithm with base 2.

The height of a balanced binary tree is O(log_2 n), since every node has two (note the "two" as in log_2 n) child nodes. So, a tree with n nodes has a height of log_2 n.

Another example is binary search, which has a running time of O(log_2 n) because at every step you divide the search space by 2.

Effluence answered 21/2, 2010 at 20:12 Comment(3)
O(log n) is the same order as O(ld n) or O(LN n). They are proportional. I understand that for learning purposes it's easier to use ld.Rowen
"more precisely it's O(ld n)" - No, it isn't: all logs are the same order (each differing from the others only by some constant scaling factor, which is ignored/ignorable).Brink
you're right chris, very bad wording. should have said it as helios did. it helps for learning/understanding but finally all logs are the same order.Effluence
D
16

But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)?

I would rephrase this as 'height of a complete binary tree is log n'. Figuring the height of a complete binary tree would be O(log n), if you were traversing down step by step.

I cannot understand how to identify a function with a logarithmic time.

Logarithm is essentially the inverse of exponentiation. So, if each 'step' of your function eliminates a constant fraction of the elements from the original item set, that is a logarithmic time algorithm.

For the tree example, you can easily see that stepping down a level of nodes cuts down an exponential number of elements as you continue traversing. The popular example of looking through a name-sorted phone book is essentially equivalent to traversing down a binary search tree (middle page is the root element, and you can deduce at each step whether to go left or right).

Dashtilut answered 26/5, 2013 at 8:35 Comment(1)
+1 for mentioning "Logarithm is essentially the inverse of exponentiation".Structuralism
C
13

O(log n) refers to a function (or algorithm, or step in an algorithm) working in an amount of time proportional to the logarithm (usually base 2, but not always, and in any event this is insignificant by big-O notation*) of the size of the input.

The logarithmic function is the inverse of the exponential function. Put another way, if your input grows exponentially (rather than linearly, as you would normally consider it), your function grows linearly.

O(log n) running times are very common in any sort of divide-and-conquer application, because you are (ideally) cutting the work in half every time. If in each of the division or conquer steps, you are doing constant time work (or work that is not constant-time, but with time growing more slowly than O(log n)), then your entire function is O(log n). It's fairly common to have each step require linear time on the input instead; this will amount to a total time complexity of O(n log n).

The running time complexity of binary search is an example of O(log n). This is because in binary search, you are always ignoring half of your input in each later step by dividing the array in half and only focusing on one half with each step. Each step is constant-time, because in binary search you only need to compare one element with your key in order to figure out what to do next, regardless of how big the array you are considering is at any point. So you do approximately log(n)/log(2) steps.

The running time complexity of merge sort is an example of O(n log n). This is because you are dividing the array in half with each step, resulting in a total of approximately log(n)/log(2) steps. However, in each step you need to perform merge operations on all elements (whether it's one merge operation on two sublists of n/2 elements, or two merge operations on four sublists of n/4 elements, is irrelevant because it adds to having to do this for n elements in each step). Thus, the total complexity is O(n log n).
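
A compact top-down merge sort in C illustrating this (a sketch, assuming the caller supplies a scratch buffer tmp of the same length as the array):

    #include <string.h>

    /* Sorts a[lo..hi] using tmp as scratch space.
       log(n) levels of halving, O(n) merge work per level: O(n log n) total. */
    void merge_sort(int a[], int tmp[], int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        merge_sort(a, tmp, lo, mid);          /* sort left half  */
        merge_sort(a, tmp, mid + 1, hi);      /* sort right half */

        int i = lo, j = mid + 1, k = lo;      /* merge the two halves into tmp */
        while (i <= mid && j <= hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi)  tmp[k++] = a[j++];
        memcpy(a + lo, tmp + lo, (hi - lo + 1) * sizeof a[0]);
    }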

*Remember that in big-O notation, by definition, constants don't matter. Also, by the change-of-base rule for logarithms, the only difference between logarithms of different bases is a constant factor.

Cariole answered 21/2, 2010 at 20:19 Comment(1)
The final * note solved my confusion about logarithms being based on 2 or 10 :) Thanks a lot.Flea
D
12

These two cases will take O(log n) time:

    // Case 1: i doubles each iteration, so the loop runs about log2(n) times
    f(int n) {
      int i;
      for (i = 1; i < n; i = i * 2)
        printf("%d", i);
    }

    // Case 2: i halves each iteration, so the loop also runs about log2(n) times
    f(int n) {
      int i;
      for (i = n; i >= 1; i = i / 2)
        printf("%d", i);
    }
Determined answered 5/7, 2013 at 3:43 Comment(3)
I'm sure I'm missing something, but wouldn't i always be zero and the loops run forever in both of those cases, since 0*2=0 and 0/2=0?Sycamine
@dj_segfault, that was my mistake.I think now it does make sense..:)Determined
@RaviBisla Other answers state that an input of 10 would take 1 time as much as 10 loops, and an input of 100 would take 3 times the input time of 1, thats definetly not the case with those examples. https://mcmap.net/q/40462/-what-does-o-log-n-mean-exactlyTony
A
11

It simply means that the time needed for this task grows with log(n) (example: 2s for n = 10, 4s for n = 100, ...). Read the Wikipedia articles on the Binary Search Algorithm and Big O Notation for more details.

Accompany answered 21/2, 2010 at 20:10 Comment(0)
S
10

Simply put: At each step of your algorithm you can cut the work in half. (Asymptotically equivalent to third, fourth, ...)

Savdeep answered 22/2, 2010 at 16:41 Comment(1)
This answer is very imprecise. First of all, you can think of cutting the work in half only in the case of the logarithm in base 2. It's really incredible how this answer (and most of the answers to the original question) received so many up-votes. "(Asymptotically equivalent to third, fourth, ...)"? Why answering a question if you don't have time?Luehrmann
C
8

If you plot a logarithmic function on a graphical calculator or something similar, you'll see that it rises really slowly -- even more slowly than a linear function.

This is why algorithms with a logarithmic time complexity are highly sought after: even for really big n (let's say n = 10^8, for example), they perform more than acceptably.

Camshaft answered 21/2, 2010 at 20:11 Comment(0)
I
7

I can add something interesting, which I read in the book by Cormen et al. a long time ago. Now, imagine a problem where we have to find a solution in a problem space. This problem space should be finite.

Now, if you can prove that at every iteration of your algorithm you cut off a fraction of this space that is no less than some limit, this means that your algorithm is running in O(log N) time.

I should point out that we are talking here about a relative fraction limit, not an absolute one. Binary search is a classical example: at each step we throw away 1/2 of the problem space. But binary search is not the only such example. Suppose you proved somehow that at each step you throw away at least 1/128 of the problem space. That means your program is still running in O(log N) time, although significantly slower than binary search. This is a very good hint for analyzing recursive algorithms: it can often be proved that at each step the recursion will not use several variants, and this leads to the cutoff of some fraction of the problem space. A sketch of that idea follows.
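
A C sketch of that idea (hypothetical numbers: each step discards at least 1/128 of the space, so the step count is still O(log N), just with a bigger constant than binary search):

    /* removes at least 1/128 of the remaining space per step;
       steps grow like log(n) / log(128/127), i.e. still O(log n) */
    int steps_needed(long long space) {
        int steps = 0;
        while (space > 1) {
            space -= (space + 127) / 128;   /* discard ceil(space/128) >= 1 */
            ++steps;
        }
        return steps;
    }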

Intracranial answered 4/2, 2014 at 12:52 Comment(0)
C
7

Actually, if you have a list of n elements and create a binary tree from that list (as in a divide-and-conquer algorithm), you keep dividing by 2 until you reach lists of size 1 (the leaves).

At the first step, you divide by 2. You then have 2 lists (2^1); you divide each by 2, so you have 4 lists (2^2); you divide again, you have 8 lists (2^3), and so on until your list size is 1.

That gives you the equation :

n/(2^steps)=1 <=> n=2^steps <=> lg(n)=steps

(you take the lg of each side, lg being the log base 2)

Cletuscleve answered 25/11, 2016 at 18:50 Comment(3)
Until some malware begins to insert a new list with x length at two levels before the leaves nodes. Then it will seem to be an infinite loop...Hearing
I didn't get your comment. Is my explanation wrong ?Cletuscleve
I was only making a hypothetical joke. I wasn't really meaning anything by it.Hearing
F
6

The complete binary tree example is O(log n) because the search looks like this:

1 2 3 4 5 6 7 8 9 10 11 12

Searching for 4 takes 3 steps: 6, 3, then 4. And log2(12) ≈ 3.58, which is a good approximation of how many steps were needed.

Flyman answered 21/2, 2010 at 20:11 Comment(2)
thanks for the example. It clearly says how our algorithm can use logarithmic time in divide and conquer method.Voight
So if its a loop of n/2 its always log(n)?Croak
C
6

Tree

log x to base b = y is the inverse of b^y = x

If you have an M-ary tree of depth d and size n, then:

  • traversing the whole tree ~ O(M^d) = O(n)

  • Walking a single path in the tree ~ O(d) = O(log n to base M)

Ceraceous answered 28/5, 2013 at 7:7 Comment(0)
S
6

I can give an example for a for loop; maybe once you grasp the concept it will be simpler to understand in different contexts.

That means that in the loop the step grows exponentially. E.g.

for (i=1; i<=n; i=i*2) {;}

The complexity in O-notation of this program is O(log(n)). Let's try to loop through it by hand (n being somewhere between 512 and 1023, excluding 1024):

step: 1   2   3   4   5    6    7    8     9     10
   i: 1   2   4   8   16   32   64   128   256   512

Although n is somewhere between 512 and 1023, only 10 iterations take place. This is because the step in the loop grows exponentially and thus takes only 10 iterations to terminate.

The logarithm of x (to the base a) is the inverse function of a^x.

It is like saying that the logarithm is the inverse of the exponential.

Now try to see it that way: if the exponential grows very fast, then the logarithm grows (inversely) very slowly.

The difference between O(n) and O(log(n)) is huge, similar to the difference between O(n) and O(a^n) (a being a constant).

Superabundant answered 16/5, 2015 at 4:27 Comment(0)
Q
6

Every time we write an algorithm or code we try to analyze its asymptotic complexity. It is different from its time complexity.

Asymptotic complexity is the behavior of execution time of an algorithm while the time complexity is the actual execution time. But some people use these terms interchangeably.

Because time complexity depends on various parameters, viz.:
1. Physical system
2. Programming language
3. Coding style
4. And much more...

The actual execution time is not a good measure for analysis.


Instead, we take the input size as the parameter, because whatever the code is, the input is the same. So the execution time is a function of input size.

Following is an example of Linear Time Algorithm


Linear Search
Given n input elements, to search for an element in the array you need at most n comparisons. In other words, no matter what programming language you use, what coding style you prefer, or what system you execute it on, in the worst-case scenario it requires n comparisons. The execution time is linearly proportional to the input size.
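
For reference, that linear search looks like this in C (a sketch):

    /* up to n comparisons, no matter the language or machine: O(n) */
    int linear_search(const int a[], int n, int key) {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return i;
        return -1;
    }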

And it's not just search; whatever the work may be (increment, compare, or any operation), it's a function of the input size.

So when you say an algorithm is O(log n), it means the execution time grows with the logarithm of the input size n.

As the input size increases, the work done (here, the execution time) increases. (Hence the proportionality.)

      n      Work
      2     1 unit of work
      4     2 units of work
      8     3 units of work

See, as the input size increases, the work done increases, and this is independent of any machine. If you try to find out the value of a unit of work, you'll see it actually depends on those parameters specified above; it will change according to the system.

Quitt answered 9/2, 2018 at 5:59 Comment(1)
"time complexity" does not mean "actual execution time". This answer is simply wrong.Mattingly
D
5

In information technology it means that:

  f(n) = O(g(n)) if there exist a suitable constant C and an N0, independent of n,
  such that
  for all n > N0, "C*g(n) > f(n) > 0" is true.

And it seems that this notation was mostly taken from mathematics.

In this article there is a quote: D.E. Knuth, "BIG OMICRON AND BIG OMEGA AND BIG THETA", 1976:

On the basis of the issues discussed here, I propose that members of SIGACT, and editors of computer science and mathematics journals, adopt notations as defined above, unless a better alternative can be found reasonably soon.

It is 2016 now, but we still use this notation today.


In mathematical analysis it means that:

  lim (f(n)/g(n))=Constant; where n goes to +infinity

But even in mathematical analysis this symbol was sometimes used in the sense "C*g(n) > f(n) > 0".

As I know from university, the symbol was introduced by the German mathematician Landau (1877-1938).

Ditto answered 2/4, 2014 at 11:37 Comment(0)
M
5

O(log n) is one of the common time complexities used to measure the runtime performance of code.

I hope you have already heard of Binary search algorithm.

Let's assume you have to find an element in the array of size N.

Basically, the code execution goes like N, N/2, N/4, N/8, ... etc.

If you sum all the work done at each level you will end up with n(1+1/2+1/4....) and that is equal to O(logn)

Maltreat answered 30/5, 2019 at 12:5 Comment(2)
How on earth log(n) equals n?!Deckard
This is totally wrong.Mattingly
B
3

If you are looking for an intuition-based answer, I would like to offer two interpretations:

  1. Imagine a very high hill with a very broad base as well. To reach the top of the hill there are two ways: one is a dedicated pathway spiraling around the hill to the top; the other is small terrace-like carvings cut out to provide a staircase. If the first way takes linear time O(n), the second one takes O(log n).

  2. Imagine an algorithm which accepts an integer n as input and completes in time proportional to n; then it is O(n) or Theta(n). But if it runs in time proportional to the number of digits or the number of bits in the binary representation of the number, then the algorithm runs in O(log n) or Theta(log n) time.

Bos answered 26/5, 2013 at 16:56 Comment(1)
please edit. has "O(n) or theta(n)" in both scenarios...? Also, I've heard this a lot, the size vs the # digits. Are we saying size === 128 for n=10000000 and digits === 8 for n=10000000? Please elucidate.Mika
T
2

Divide-and-conquer algorithms in which each step reduces the problem to a single subproblem of half the size have complexity O(log n). One example here: calculate your own power function,

int power(int x, unsigned int y)
{
    int temp;
    if (y == 0)
        return 1;
    temp = power(x, y / 2);        /* halve the exponent: T(y) = T(y/2) + O(1) */
    if (y % 2 == 0)
        return temp * temp;        /* even y: x^y = (x^(y/2))^2 */
    else
        return x * temp * temp;    /* odd y:  x^y = x * (x^(y/2))^2 */
}

from http://www.geeksforgeeks.org/write-a-c-program-to-calculate-powxn/

Thar answered 27/6, 2015 at 8:20 Comment(0)
J
1

I would like to add that the height of the tree is the length of the longest path from the root to a leaf, and that the height of a node is the length of the longest path from that node to a leaf. The path means the number of nodes we encounter while traversing the tree between two nodes. In order to achieve O(log n) time complexity, the tree should be balanced, meaning that the difference of the height between the children of any node should be less than or equal to 1. Therefore, trees do not always guarantee a time complexity O(log n), unless they are balanced. Actually in some cases, the time complexity of searching in a tree can be O(n) in the worst case scenario.

You can take a look at balanced trees such as the AVL tree. This one balances the tree while inserting data in order to keep the time complexity of searching the tree at O(log n).

Jingo answered 1/12, 2013 at 23:52 Comment(0)
P
-5

Big O notations, for reference. This may help!

Big O        | Example algorithm | Complexity class
O(log N)     | Binary search     | logarithmic
O(N)         | Simple search     | linear
O(N*log N)   | Quicksort         | log-linear
O(2^N)       | Recursive         | exponential
O(N^2)       | Selection sort    | quadratic (directly proportional to the square of the input size)
Postmeridian answered 30/8, 2022 at 12:39 Comment(0)
