Algorithms with superexponential runtime?
I was talking with a student the other day about the common complexity classes of algorithms, like O(n), O(n^k), O(n lg n), O(2^n), O(n!), etc. I was trying to come up with an example of a problem that is still decidable (i.e. not the halting problem!) but whose best known solution has super-exponential runtime, such as O(2^(2^n)). The only example I know of is satisfiability of Presburger arithmetic, which I don't think any intro CS students would really understand or be able to relate to.

My question is whether there is a well-known problem whose best known solution has superexponential runtime; at least ω(n!) or ω(n^n). I would really hope that there is some "reasonable" problem meeting this description, but I'm not aware of any.

Vaporization answered 6/3, 2011 at 10:29 Comment(1)
Interesting question. Going beyond that: is there any program which is guaranteed to terminate for any finite combination of input symbols, but whose worst-case time and space requirements for a given input size cannot be computed in bounded time? Note that restricting the question to "any valid input" would render the question meaningless: if the validity of input can be determined in finite time, then termination would be guaranteed for all inputs; if it can't, then any undecidable problem would complete for those inputs where it completes and wouldn't for those where it doesn't.Corder
Maximum Parsimony is the problem of finding an evolutionary tree connecting n DNA sequences (representing species) that requires the fewest single-nucleotide mutations. The n given sequences are constrained to appear at the leaves; the tree topology and the sequences at internal nodes are what we get to choose.

In more CS terms: We are given a bunch of length-k strings that must appear at the leaves of some tree, and we have to choose a tree, plus a length-k string for each internal node in the tree, so as to minimise the sum of Hamming distances across all edges.

When a fixed tree is also given, the optimal assignment of sequences to internal nodes can be determined very efficiently using the Fitch algorithm. But in the usual case a tree is not given (i.e. we are asked to find the optimal tree), and this makes the problem NP-hard; in the worst case, essentially every tree must be tried. Even though an evolutionary tree has a root (representing the hypothetical ancestor), we only need to consider distinct unrooted trees, since the minimum number of mutations required is not affected by the position of the root. For n species there are 3 * 5 * 7 * ... * (2n-5) leaf-labelled unrooted binary trees. (There is just one such tree with 3 species, which has a single internal vertex and 3 edges; the 4th species can be inserted at any of the 3 edges to produce a distinct 5-edge tree; the 5th species can be inserted at any of these 5 edges, and so on -- this process generates every tree exactly once.) This product is sometimes written (2n-5)!!, with !! meaning "double factorial".
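The (2n-5)!! count is easy to verify numerically; here is a minimal sketch (the function name num_unrooted_binary_trees is a choice made here, not from the original post):

```python
def num_unrooted_binary_trees(n):
    # (2n-5)!! = 3 * 5 * 7 * ... * (2n-5): each new species can be
    # attached to any existing edge, and each attachment adds 2 edges.
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in range(3, 8):
    print(n, num_unrooted_binary_trees(n))  # 1, 3, 15, 105, 945
```

Already at n = 20 this exceeds 2 * 10^20 trees, which is why branch and bound matters in practice.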

In practice, branch and bound is used, and on most real datasets this manages to avoid evaluating most trees. But highly "non-treelike" random data requires all, or almost all (2n-5)!! trees to be examined -- since in this case many trees have nearly equal minimum mutation counts.
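For the fixed-tree subproblem mentioned above, Fitch's algorithm takes only a few lines per character position. A minimal sketch (the node names and the dict-based tree encoding are choices made here):

```python
def fitch(tree, root, leaf_states):
    """Minimum number of mutations for one character on a FIXED rooted binary tree.

    tree: dict mapping each internal node to its (left, right) children;
    leaf_states: dict mapping each leaf name to its character state.
    """
    mutations = 0

    def state_set(node):
        nonlocal mutations
        if node not in tree:                  # leaf: its state is given
            return {leaf_states[node]}
        left, right = tree[node]
        s1, s2 = state_set(left), state_set(right)
        if s1 & s2:                           # children agree: intersect
            return s1 & s2
        mutations += 1                        # children disagree: one mutation
        return s1 | s2

    state_set(root)
    return mutations

# Tree ((A,B),(C,D)) with states A, A, G, G needs exactly one mutation.
tree = {"R": ("X", "Y"), "X": ("A", "B"), "Y": ("C", "D")}
print(fitch(tree, "R", {"A": "A", "B": "A", "C": "G", "D": "G"}))  # 1
```

Running this over every tree generated by the insertion process above gives the brute-force maximum-parsimony search.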

Allistir answered 6/3, 2011 at 12:20 Comment(4)
@j_random_hacker- My understanding is that n!! < n!, since double-factorial means n(n-2)(n-4)...(1), rather than (n!)!. Is this incorrect?Vaporization
You're right. But bear in mind that there is a 2n (rather than just an n) under the double factorial sign for this problem, and (2n)!! > n! for all n > 0. Actually I'm not sure what the tightest "canonical" big-O complexity measure for this is.Allistir
@j_random_hacker- I think a "canonical" big-O for this would be O(2^n n!), since each term of the double-factorial is just over twice as large as its counterpart in the regular factorial and there's n of them. This looks like a great answer; thanks for posting it!Vaporization
Thanks! Your big-O expression looks dead-on to me.Allistir
Listing all permutations of a string of length n takes n! time; brute-force search for a Hamiltonian cycle is n!; minimum graph coloring, ....
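The brute-force Hamiltonian cycle search is a one-liner over permutations, which is where the factorial comes from. A sketch (the adjacency-dict encoding is a choice made here):

```python
from itertools import permutations

def has_hamiltonian_cycle(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    # Fix the first vertex and try all (n-1)! orderings of the rest:
    # O(n * n!) overall.
    verts = sorted(adj)
    first, rest = verts[0], verts[1:]
    for perm in permutations(rest):
        cycle = (first,) + perm + (first,)
        if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
            return True
    return False

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(has_hamiltonian_cycle(square))  # True
print(has_hamiltonian_cycle(path))    # False
```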

Edit: even faster-growing: the Ackermann function. In fact it grows faster than any primitive recursive function.

A(x,y) = y+1 (if x = 0)
A(x,y) = A(x-1,1) (if y=0)
A(x,y) = A(x-1, A(x,y-1)) otherwise.

from wiki:
A(4,3) = 2^(2^65536) - 3, ...
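The three-case recurrence above translates directly into code; a small sketch with memoization (the memoization and recursion-limit bump are additions made here to keep it runnable):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

@lru_cache(maxsize=None)
def ackermann(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

print(ackermann(2, 3))  # 9:  A(2, y) = 2y + 3
print(ackermann(3, 3))  # 61: A(3, y) = 2^(y+3) - 3
```

Anything with x >= 4 and y >= 2 is hopeless to evaluate directly, which is the point.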
Ind answered 6/3, 2011 at 10:37 Comment(4)
Listing all permutations runs in O(n * n!), as does brute-force search for a Hamiltonian cycle. This is indeed ω(n!), but what I was hoping for was something even faster-growing than this. Thanks for posting this, though; you're absolutely right!Vaporization
Hamilton cycle and graph coloring have exponential time DPs.Lecialecithin
@user635541, I don't know of any exponential-time DP for Hamiltonian cycle (and I'm not sure there is one), but even if there is, it's big enough. Also, since the OP wants a better example, I suggest the Ackermann function.Ind
@user635541, yes, you are right: HC has a simple DP with running time 2^n * poly(n), and there may be even better algorithms using graph decompositions.Ind
Do algorithms to compute real numbers to a certain precision count? The formula for the area of the Mandelbrot set converges extremely slowly; 10^118 terms for two digits, 10^1181 terms for three.

Adjutant answered 6/3, 2011 at 11:0 Comment(0)
This is not a practical everyday problem, but it's a way to construct relatively straightforward problems of increasing complexity.

  • The Kolmogorov complexity K(x) is the size of the smallest program that outputs the string x on a pre-determined universal computer U. It's easy to show that most strings cannot be compressed at all (since there are more strings of length n than programs of length n).
  • If we give U a maximum running time (say some polynomial function P), we get a time-bounded Kolmogorov complexity. The same counting argument holds: there are some strings that are incompressible under this time-bounded Kolmogorov complexity. Let's call the first such string (of some length n) x_P.
  • Since the time-bounded Kolmogorov complexity is computable, we can test all strings and find x_P.
  • Finding x_P can't be done in polynomial time, or we could use this algorithm to compress it, so finding it must be a super-polynomial problem. We do know we can find it in exp(P) time, though. (Jumping over some technical details here.)
  • So now we have a time bound E = exp(P). We can repeat the procedure to find x_E, and so on.

This approach gives us a decidable super-F problem for every time-constructible function F: find the first string of length n (some large constant) that is incompressible under time-bound F.
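The counting argument behind the first bullet can be checked in a couple of lines. This is a toy illustration of the pigeonhole step, not an actual Kolmogorov-complexity computation:

```python
# There are 2^n binary strings of length n, but only
# 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 bit strings (candidate programs)
# of length strictly less than n. So for every n, at least one string
# of length n has no shorter description.
n = 16
num_strings = 2 ** n
num_shorter_programs = sum(2 ** k for k in range(n))
print(num_strings - num_shorter_programs)  # 1
```

The same pigeonhole count survives unchanged when the programs are restricted to run within a time bound, which is what makes x_P well-defined.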

Sogdian answered 8/1, 2015 at 9:31 Comment(0)
Two elementary results about converting complements of regular languages back into regular expressions involve truly superexponential runtime. See this Computer Science question for more details.

The definition of 'superexponential'

There are various differing definitions of exponential time.

  1. The most restrictive definition defines exponential time as being upper bounded by a simple exponential function B^x for some B > 1. This defines the E complexity class.
  2. The most conventional definition, EXPTIME, consists of all algorithms with runtime requiring only a single exponent to express. Namely, it is of the form B^g(x) where g is polynomial in x. (Polynomials themselves need no exponential tower to express.)
    • Factorial time n!, which asymptotically dominates any simple exponential function, is bounded above by n^n = e^(n ln n). Therefore the expression n^n is still singly exponential, and factorial time still qualifies for EXPTIME.
    • The function 2^(2^x) is doubly exponential. It cannot be written in the singly exponential form B^g(x) with g polynomial, so it is outside EXPTIME, but it is in 2-EXPTIME.
  3. The least restrictive definition, ELEMENTARY, consists of all algorithms whose runtime is an elementary expression, which may contain any number of nested exponents. It is the union of all the k-EXPTIME classes.
    • Tetration is an example of a primitive recursive operation that is not elementary. The same applies to all higher hyperoperations.

If you are thinking about any algorithm, including those with little practical value, the Ackermann function and the Knuth up-arrow function come to mind. The Ackermann function is among the first functions shown to be computable but not primitive recursive, and it asymptotically dominates every primitive recursive function.

Computing the Knuth up-arrow function A(b, x, n) = b ↑^n x for a nontrivial base b and 'superexponent' x requires 'super-primitive-recursive' time, on the order of the (n + 2)-th hyperoperation.
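The hyperoperation itself is only a few lines of code, and the blow-up is visible immediately. A sketch (the function name up and the recursive formulation are choices made here):

```python
def up(b, x, n):
    """Knuth's up-arrow b ^^n x, where a single arrow (n = 1) is exponentiation."""
    if n == 1:
        return b ** x        # base case: ordinary exponentiation
    if x == 0:
        return 1             # empty tower
    # b ^^n x = b ^^(n-1) (b ^^n (x - 1)): unfold one level of the tower
    return up(b, up(b, x - 1, n), n - 1)

print(up(2, 3, 1))  # 8     = 2^3
print(up(2, 3, 2))  # 16    = 2^(2^2), tetration
print(up(2, 4, 2))  # 65536 = 2^(2^(2^2))
```

up(2, 5, 2) already has 19729 decimal digits, and up(2, 3, 3) (i.e. 2 ↑↑↑ 3 = 2 ↑↑ 4) is just within reach; anything beyond that is not.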

Two regular expression conversion problems

A regular expression R (not a POSIX regex) is a template that specifies a match pattern in text. Let L be the set of all text strings of the form described by R. Regular expressions are used to determine, in constant space, whether a string is a valid input; for example, the regular expression ^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$ describes a valid email address.

Kleene's theorem: It is well-known that for every regular language L over some finite alphabet Σ, there exist a regular expression R and a deterministic finite automaton M, each of which describes exactly the language L.

A double exponential problem

From Kleene's theorem, the complement ∁L of a regular language L is another regular language. The DFA M′ for ∁L is identical to the DFA M for L except that accepting and nonaccepting states are swapped. The problem: derive a regular expression for ∁L without using the negation bar.

Answer: This problem is surprisingly nasty. The regular expression for the complement language can be doubly exponential in the length of the regular expression for the base language. A proof is given on page 8 of an arXiv article by Wouter Gelade and Frank Neven.
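The exponential blow-up of a single determinization step, which drives this double-exponential bound, can be observed directly on the classic language "the n-th symbol from the end is 1": its NFA has n + 1 states, but its DFA needs 2^n. A sketch of the subset construction under that setup (the NFA encoding here is a choice made for illustration):

```python
def dfa_states_for_nth_from_end(n):
    # NFA for "the n-th symbol from the end is 1", with states 0..n:
    # state 0 loops on 0/1 and nondeterministically guesses the marked 1;
    # states 1..n-1 count the remaining symbols; state n is accepting.
    def step(subset, c):
        out = set()
        for q in subset:
            if q == 0:
                out.add(0)
                if c == "1":
                    out.add(1)
            elif q < n:
                out.add(q + 1)
        return frozenset(out)

    # Subset construction: explore all reachable sets of NFA states.
    start = frozenset([0])
    seen, stack = {start}, [start]
    while stack:
        subset = stack.pop()
        for c in "01":
            nxt = step(subset, c)
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

for n in (1, 2, 3, 4):
    print(n, dfa_states_for_nth_from_end(n))  # 2, 4, 8, 16
```

Converting that 2^n-state DFA back into a regular expression can then cost another exponential, giving the doubly exponential total.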

A superexponential algorithm

Definition: A generalized regular expression is a regular expression that also allows a complementation operator, so that parts of the pattern may be specified as the complement of a regular language.

Using ¬ as the complementation operator, an example is ^¬((0|1)*00(0|1)*)[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$, which describes all strings that can be partitioned into a part that does not contain two zeroes in a row and a part that represents an email address.

Generalized regular expressions have exactly the same expressive power as standard regular expressions without complementation, but they can succinctly represent a regular language containing a part described as the complement of a regular language with a short regular expression.

Your task is to convert the generalized regular expression into a standard regular expression.

Answer: You can recursively convert each complemented subexpression into an NFA and then convert each NFA into a DFA. However, the running time of this algorithm is nonelementary, 2 ↑↑ Θ(n), since each layer of recursive negation exponentially increases the number of states of the DFA. In fact, this problem remains nonelementary even if Kleene closure is disallowed. (Source: Algorithms by Jeff Erickson.)

Arva answered 14/1, 2024 at 1:56 Comment(0)
