Algorithms for Big O Analysis

Which algorithms do you find have remarkable (tough, strange) complexity analyses, in terms of both the resulting O notation and the uniqueness of the way they are analyzed?

Abhorrent answered 23/2, 2009 at 7:39 Comment(0)

I have (quite) a few examples:

  • The union-find data structure, which supports operations in (amortized) inverse Ackermann time. It's particularly nice because the data structure is incredibly easy to code (see the sketch after this list).
  • Splay trees, which are self-adjusting binary search trees (that is, no extra information is stored other than the BST itself -- no red/black information). Amortized analysis was essentially invented to prove bounds for splay trees; splay trees run in amortized logarithmic time, but worst-case linear time. The proofs are cool.
  • Fibonacci heaps, which perform most of the priority queue operations in amortized constant time, thus improving the runtime of Dijkstra's algorithm and other problems. As with splay trees, there are slick "potential function" proofs.
  • Bernard Chazelle's algorithm for computing minimum spanning trees in O(m α(m, n)) time, i.e. linear times the inverse Ackermann function. The algorithm uses soft heaps, a variant of the traditional priority queue, except that some "corruption" might occur and queries might not be answered correctly.
  • While on the topic of MSTs: an optimal algorithm has been given by Pettie and Ramachandran, but we don't know the running time!
  • Lots of randomized algorithms have interesting analyses. I'll only mention one example: Delaunay triangulation can be computed in expected O(n log n) time by incrementally adding points; the analysis is apparently intricate, though I haven't seen it.
  • Algorithms that use "bit tricks" can be neat, e.g. sorting in O(n log log n) time (and linear space) -- that's right, it breaks the O(n log n) barrier by using more than just comparisons.
  • Cache-oblivious algorithms often have interesting analyses. For example, cache-oblivious priority queues (see page 3) use log log n levels of sizes n, n^(2/3), n^(4/9), and so on.
  • (Static) range-minimum queries on arrays are neat. The standard proof tests your limits with respect to reduction: range-minimum queries are reduced to lowest common ancestor queries in trees, which are in turn reduced to range-minimum queries on a specific kind of array. The final step uses a cute trick, too.
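
As promised in the first bullet, here is a minimal union-find sketch (my own illustration, not from the original answer) with path compression and union by rank, the combination behind the amortized inverse-Ackermann bound:

    class UnionFind:
        """Disjoint-set forest with path compression and union by rank."""

        def __init__(self, n):
            self.parent = list(range(n))  # every element starts as its own root
            self.rank = [0] * n           # rough upper bound on tree height

        def find(self, x):
            # Path compression: point every node on the search path at the root.
            if self.parent[x] != x:
                self.parent[x] = self.find(self.parent[x])
            return self.parent[x]

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            # Union by rank: hang the shallower tree under the deeper one.
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

    uf = UnionFind(10)
    uf.union(1, 2); uf.union(2, 3)
    print(uf.find(1) == uf.find(3))  # True
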
Spreadeagle answered 23/2, 2009 at 8:14 Comment(0)

Ackermann's function.
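
For reference, a sketch of the usual recursive definition (fine for seeing how explosively it grows, hopeless to actually evaluate beyond tiny arguments):

    def ackermann(m, n):
        # Grows faster than any primitive recursive function; its *inverse*
        # is the near-constant factor in the union-find and MST bounds above.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9 -- but ackermann(4, 2) already has 19,729 digits
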

Wendy answered 23/2, 2009 at 7:43 Comment(1)
@p This answer is probably fine, even though it's link-only. "Ackermann's function" is enough to find it on the internet and the question—which is probably off-topic—is only soliciting algorithms, not explanations.Wira

This one is kinda simple but Comb Sort blows my mind a little.

http://en.wikipedia.org/wiki/Comb_sort

It is such a simple algorithm that, for the most part, it reads like an overly complicated bubble sort, yet it approaches O(n log n) behaviour in practice (its worst case is still quadratic). I find that mildly impressive.
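
A bare-bones sketch (my own, using the usual shrink factor of 1.3) that shows how little separates it from bubble sort:

    def comb_sort(a):
        # Bubble sort, except elements a "gap" apart are compared and the gap
        # shrinks by a factor of about 1.3 each pass; the final gap-1 passes
        # are literally bubble sort on an almost-ordered list.
        gap = len(a)
        swapped = True
        while gap > 1 or swapped:
            gap = max(1, int(gap / 1.3))
            swapped = False
            for i in range(len(a) - gap):
                if a[i] > a[i + gap]:
                    a[i], a[i + gap] = a[i + gap], a[i]
                    swapped = True
        return a

    print(comb_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]
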

The plethora of algorithms for fast Fourier transforms is impressive too; the math that proves their validity is trippy, and it was fun to try to prove a few on my own.

http://en.wikipedia.org/wiki/Fast_Fourier_transform

I can fairly easily understand the prime-radix, multiple-prime-radix, and mixed-radix algorithms, but one that works on inputs whose size is prime is quite cool.
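
For concreteness, a toy recursive radix-2 Cooley-Tukey sketch of my own, restricted to power-of-two lengths; handling a prime-sized input needs a different trick (Rader's or Bluestein's algorithm):

    import cmath

    def fft(x):
        # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])  # transform of the even-indexed samples
        odd = fft(x[1::2])   # transform of the odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    print(fft([1, 2, 3, 4]))  # ~[(10+0j), (-2+2j), (-2+0j), (-2-2j)]
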

Bougainville answered 23/2, 2009 at 7:50 Comment(0)

2D ordered search analysis is quite interesting. You've got an NxN array of numbers where each row is sorted left-to-right and each column is sorted top-to-bottom. The task is to find a particular number in the array.

The recursive algorithm -- pick the middle element, compare it with the target number, discard one quarter of the array (depending on the result of the comparison), and apply the procedure recursively to the remaining three quarters -- is quite interesting to analyze.
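
A sketch of that recursion (my own illustration, with inclusive bounds). Discarding one quadrant per step gives roughly T(m) = 3T(m/2) + O(1) on the side length m, i.e. about O(N^1.58):

    def search_2d(m, target):
        # m is an N x N matrix with rows sorted left-to-right and columns
        # sorted top-to-bottom (as in the answer above).
        def go(top, left, bottom, right):
            if top > bottom or left > right:
                return False
            mr, mc = (top + bottom) // 2, (left + right) // 2
            v = m[mr][mc]
            if v == target:
                return True
            if v < target:
                # Everything above-and-left of (mr, mc) is <= v, so discard it.
                return (go(top, mc + 1, mr, right)             # top-right
                        or go(mr + 1, left, bottom, mc)        # bottom-left
                        or go(mr + 1, mc + 1, bottom, right))  # bottom-right
            # v > target: everything below-and-right of (mr, mc) is >= v.
            return (go(top, left, mr - 1, mc - 1)              # top-left
                    or go(top, mc, mr - 1, right)              # top strip, right of mc
                    or go(mr, left, bottom, mc - 1))           # left strip, below mr

        return go(0, 0, len(m) - 1, len(m[0]) - 1)

    grid = [[1, 4, 7],
            [2, 5, 8],
            [3, 6, 9]]
    print(search_2d(grid, 6), search_2d(grid, 10))  # True False
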

Lucielucien answered 24/2, 2009 at 8:46 Comment(0)

Non-deterministic polynomial (NP) complexity gets my vote, especially with the (admittedly considered unlikely) possibility that it may turn out to be the same as polynomial. In the same vein, anything that can theoretically benefit from quantum computing (N.B. this set is by no means all algorithms).

The other that would get my vote is common mathematical operations on arbitrary-precision numbers -- this is where you have to account for the fact that multiplying big numbers is more expensive than multiplying small ones. There is quite a lot of analysis of this in Knuth (which shouldn't be news to anyone). Karatsuba's method is pretty neat: split each factor in half by digits, (A1;A2) and (B1;B2), compute A1·B1, A2·B2, and (A1+A2)·(B1+B2), then combine those three products to recover all the cross terms -- three multiplications where the schoolbook split would need four. Recurse if desired...
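
A toy Karatsuba sketch of my own, splitting by decimal digits purely for readability:

    def karatsuba(x, y):
        # Three recursive multiplications instead of four, giving
        # O(n^log2(3)) ~ O(n^1.585) digit operations.
        if x < 10 or y < 10:
            return x * y
        half = max(len(str(x)), len(str(y))) // 2
        base = 10 ** half
        a1, a2 = divmod(x, base)   # x = a1*base + a2
        b1, b2 = divmod(y, base)   # y = b1*base + b2
        hi = karatsuba(a1, b1)
        lo = karatsuba(a2, b2)
        mid = karatsuba(a1 + a2, b1 + b2) - hi - lo   # = a1*b2 + a2*b1
        return hi * base ** 2 + mid * base + lo

    print(karatsuba(1234, 5678) == 1234 * 5678)  # True
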

Larger answered 23/2, 2009 at 7:48 Comment(3)
Karatsuba's method is nifty, this is true. However, since a fast Fourier transform performs a convolution, it can be used to perform the fastest multiplications known, assuming the numbers are large enough to justify the aggravation of writing a mixed-input-size FFT and tuning it.Bougainville
Hmm, I'll have to look that up. How precise do the components have to be to guarantee accurate integer multiplication?Larger
As I understand it you simply need enough precision to hold the final number. The FFT multiplication technique can also be used on integer types directly, and then it always gives accurate answers. I don't have my copy of Knuth here, but I think he mentions the technique and goes over it.Bougainville

Shell sort. There are tons of variants with various increments, most of which have no benefits except to make the complexity analysis simpler.
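
A bare-bones sketch (mine, using Shell's original n/2, n/4, ..., 1 gaps); the variants alluded to above differ only in the gap schedule:

    def shell_sort(a):
        # Insertion sort over progressively smaller gaps; swapping in a
        # different gap sequence only changes how `gap` is updated below.
        gap = len(a) // 2
        while gap > 0:
            for i in range(gap, len(a)):
                item, j = a[i], i
                while j >= gap and a[j - gap] > item:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = item
            gap //= 2
        return a

    print(shell_sort([23, 5, 42, 8, 16, 4, 15]))  # [4, 5, 8, 15, 16, 23, 42]
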

Portend answered 30/7, 2009 at 14:26 Comment(0)
