Turing complete and parallel programming (true concurrency)
I often see people say that if you can do X in one language, you can do Y in another language, which is the Turing-completeness argument. So you'll often see (usually in a snide comment): "sure, you can do X with Y, because Y is also Turing complete."

I took CS theory a long time ago, but I don't think this is always true, because I'm not sure where Turing completeness fits into concurrency. For example, there are programming languages where, with the right hardware, you can make things happen at exactly the same time, but others where that is not possible.

I understand this is probably more of a hardware/driver issue than a language issue, but I'm curious: does concurrency change what it means to be Turing complete, and if so, how? Can you be more than Turing complete?

EDIT: The original reason I asked this question was in large part quantum computing. Although the accepted answer doesn't say this, quantum computing is (ostensibly) a subset of Turing computability.

Docilu answered 28/8, 2011 at 0:10 Comment(7)
AFAIK "Turing Complete" itself doesn't say anything about concurrency... thus it is possible to implement concurrency in a Turing Complete language - and this is where the argument becomes fuzzy... I don't think there is "the one correct answer" to what you describe...Nena
There's actually a separate site for questions about theoretical computer science: cstheory.stackexchange.com. I've flagged this post for migration.Circinate
Apologies. I was not aware of that site.Docilu
@Jeremy: cstheory is for "research-level questions" onlySilicone
@sepp2k: Oh, my mistake. In that case, do you think this question is on-topic for SO?Circinate
SO has quite a few questions on automata, grammars, etc. Unless they're all off-topic, I would think that this question is acceptable. It used the "theory" tag. What else is that tag for?Hiphuggers
@JeremyBanks This question is too basic for Theoretical Computer Science; more advanced questions on the topic would work there, but generally speaking, if you aren't at least doing a PhD, CSTheory isn't the site for you. This question is borderline for SO, which is uneasy with such theoretical questions (what, no code?). There is a proposed computer science site where this question would fit well.Pontificate
7
This is a confusing topic for many people; you're not alone. The issue is that there are two different definitions of "possible" in play here. One definition of "possible" is how you're using it: is it possible to do concurrency, is it possible to operate a giant robot using the language, is it possible to make the computer enjoy strawberries, etc. This is the layman's definition of "possible".

Turing completeness has nothing to do with what's possible in the above sense. Certainly, concurrency isn't possible everywhere because (for at least some definition of concurrency) it needs to be the case that the language can produce code that can run on two or more different processors simultaneously. A language that can only compile to code that will run on a single processor thus would not be capable of concurrency. It could still be Turing-complete, however.

Turing completeness has to do with the kinds of mathematical functions that can be computed on a machine given enough memory and running time. For purposes of evaluating mathematical functions, a single processor can do everything multiple processors can because it can emulate multiple processors by executing one processor's logic after the other. The (unproven and unprovable, though falsifiable) statement that all mathematical functions that could be computed on any device are computable using a Turing machine is the so-called Church-Turing thesis.

A programming language is called Turing-complete if you can prove that you can emulate any Turing machine using it. Combining this with the Church-Turing thesis, this implies that the programming language is capable of evaluating every type of mathematical function that any device could evaluate, given enough time and memory. Most languages are Turing-complete because this only requires the capacity to allocate dynamic arrays and use loops and if-statements as well as some basic data operations.
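To make the last point concrete, here is a minimal sketch of how little a language needs in order to emulate a Turing machine: a growable list for the tape, a loop, and an if-statement. The bit-flipping machine below is an illustrative example of my own, not one from any particular text:

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine described by a transition table.

    transitions maps (state, read_symbol) -> (next_state, write_symbol, move),
    where move is "R" or "L". The machine halts when it enters state "halt".
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        if head == len(tape):          # grow the tape on demand ("enough memory")
            tape.append(blank)
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit reached")

# A toy machine that flips every bit of a binary string, then halts.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "1011"))  # 0100
```

The `max_steps` cap is only there so the sketch always returns; a real Turing machine has no such bound, which is exactly where the halting problem comes from.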

Going in the other direction, since a Turing machine can be constructed to emulate any processor we currently know of, and programming languages do what they do using processors, no current programming language has more expressive power than a Turing machine, either. So the computation of mathematical functions is equally possible across all common programming languages: functions computable in one are computable in another. This says nothing about performance; concurrency is essentially a performance optimization.

Hiphuggers answered 28/8, 2011 at 0:52 Comment(6)
I would say "emulate a universal Turing machine", or "emulate any Turing machine" - there are lots of simple Turing machines that can be emulated by, e.g., finite state machines.Infiltrate
Are there any known things (mathematical constructs) that can not be represented by a Turing machine?Docilu
There are uncomputable numbers. Turing came up with the Turing machine concept as a very simple model of all computation in order to solve the "decision problem"; he showed that there must be some functions that no algorithm could compute.Infiltrate
@Adam That's a very good question. People who are unfamiliar with these topics are often surprised that the answer is yes. For one (reasonably) simple example of something that cannot be computed by any Turing machine, see the Wikipedia article on the halting problem.Hiphuggers
Haha, funny: right when I read "Halting", a burst of CS theory memory came back. It's slowly coming back. Thank you, Wikipedia. It's sad that I have forgotten so much. I will say the halting problem reminds me of Russell's Paradox (ironic that I remember that).Docilu
There's a similarity between the halting problem and Russell's Paradox. I've had professors who have said that it's essentially the same argument, though I disagree with that. Turing's argument is somewhat more similar to Cantor's diagonalization argument.Hiphuggers
3
Yes and no. There is no known model of computation that can do things a Turing machine cannot do and still be called computation, as opposed to magic¹. Hence, in that sense, there is nothing beyond Turing completeness.

On the other hand, you may be familiar with the saying that “there is no problem that cannot be solved by adding a layer of indirection”. So we might want to distinguish between models of computation that map directly to Turing machines and models of computation that require a level of indirection. “Adding a level of indirection” is not a precise mathematical concept in general, but in many specific cases you can observe the level of indirection. Often the easiest way to prove that some paradigm of computation is Turing-computable is to write an interpreter for it on a Turing machine, and that is exactly a level of indirection.

So let's have a look at what it means to model concurrency. You mention the ability to “make things happen at exactly the same time”. That's a specific kind of concurrency, called parallelism, and as far as concurrency goes it's a highly restricted model. The world of concurrency is a lot wilder than this. Nonetheless, parallelism already allows things that require some form of indirection when modeled on a Turing machine.

Consider the following problem: given computer programs A and B (passed on the tape of a universal Turing machine), execute them both, and return the result of either program; your program must terminate unless both A and B are non-terminating. In a purely sequential world, you can execute A and return the result; or you can execute B and return the result. But if you start by executing A, and it happens to be a non-terminating program while B does terminate, then your execution strategy does not solve the problem. And similarly, if you start by executing B, your execution strategy does not solve the problem because B might not terminate even if A does.

Given that it is undecidable whether A or B terminates, you cannot base your decision of which one to execute first on that. However, there is a very simple way to modify your Turing machine to execute the programs in parallel: put A and B on separate tapes, duplicate your automaton, and execute one step of each program until one of the two terminates. By adding this level of indirection, you can solve the parallel execution problem.
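The same stepping trick can be sketched in ordinary code, using Python generators as stand-ins for the two tapes (a toy model of the construction, not an actual Turing machine; the helper programs are made up for illustration):

```python
def parallel_or(prog_a, prog_b):
    """Step two resumable programs alternately; return the result of
    whichever halts first. Terminates unless *both* programs diverge."""
    programs = [prog_a, prog_b]
    while True:
        for prog in programs:
            try:
                next(prog)             # execute one step of this program
            except StopIteration as done:
                return done.value      # this program halted; return its result

def halts_with(value, steps):
    """A toy 'program' that runs for `steps` steps, then returns value."""
    for _ in range(steps):
        yield
    return value

def loops_forever():
    """A toy non-terminating 'program'."""
    while True:
        yield

print(parallel_or(halts_with("A", 3), loops_forever()))  # A
```

Note that neither a pure "run A first" nor a pure "run B first" strategy could replace the alternation here: swapping the arguments still terminates, precisely because each program only ever gets one step before the other is resumed.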

Solving this problem only required a slight modification to the model (it is easy to model a dual-tape Turing machine with a single-tape machine). I nonetheless mention it because it is an important example in [lambda calculus](http://en.wikipedia.org/wiki/Lambda_calculus), another important model of computation. The operation of reducing (evaluating) two lambda terms in parallel until one of them reaches a normal form (terminates) is called Plotkin's parallel or. It is known that it is not possible to write a lambda term (a lambda calculus program) that implements parallel or. Hence the lambda calculus is said to be “inherently sequential”.

The reason I mention the lambda calculus here is that most programming languages are closer to the lambda calculus than they are to Turing machines. So as a programmer, insights from the lambda calculus are often more important than insights from Turing machines. The example of parallel or shows that adding concurrency to a language² can open possibilities that are not available in the original language.

It is possible to add concurrency to a sequential language through essentially the same trick as on Turing machines: execute a small piece of thread A, then a small piece of thread B, and so on. In fact, if you don't do that in your language, the operating system's kernel can typically do it for you. Strictly speaking, this provides concurrent execution of threads, but still using a single processor.
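This time-slicing trick can be sketched in a sequential language with a toy round-robin scheduler over generator-based "threads" (a deliberately simplified sketch; a real kernel scheduler preempts at arbitrary points rather than at explicit yields):

```python
from collections import deque

def round_robin(*threads):
    """Run generator-based 'threads' one step at a time, in turn,
    collecting whatever each one yields, until all have finished."""
    ready = deque(threads)
    trace = []
    while ready:
        thread = ready.popleft()
        try:
            trace.append(next(thread))   # give this thread one time slice
            ready.append(thread)         # then re-queue it at the back
        except StopIteration:
            pass                         # thread finished; drop it
    return trace

def worker(name, steps):
    """A toy thread that performs `steps` observable actions."""
    for i in range(steps):
        yield f"{name}{i}"

print(round_robin(worker("a", 2), worker("b", 3)))
# ['a0', 'b0', 'a1', 'b1', 'b2']
```

The trace shows the interleaving: within each thread the order is preserved, but the two threads' actions alternate on a single processor, which is exactly the concurrent-but-not-parallel execution described above.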

As a theoretical model, this kind of threaded execution suffers the limitation that it is deterministic. Indeed, any system that can be modeled directly on Turing machines is deterministic. When dealing with concurrent systems, it is often important to be able to write non-deterministic programs. Often the exact order in which the multiple threads of computation are interleaved is irrelevant, so two programs are equivalent if they do essentially the same computation, but in a slightly different order. You can make a model of concurrent computation out of a model of sequential computation by looking at sets of possible interleavings instead of single program runs, but that adds a level of indirection that is difficult to manage. Hence most models of concurrency bake nondeterminism into the system, and when you do that, you can no longer run the model directly on a Turing machine.
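The "sets of possible interleavings" view can be illustrated by enumerating every order in which two threads' instruction sequences could be merged, preserving the order within each thread (the instruction strings here are made up for illustration):

```python
def interleavings(a, b):
    """Return all merges of sequences a and b that preserve the
    internal order of each sequence - the set of possible runs."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    # Either the next executed instruction comes from thread a, or from b.
    return ([[a[0]] + rest for rest in interleavings(a[1:], b)] +
            [[b[0]] + rest for rest in interleavings(a, b[1:])])

# Two toy threads: one writes x twice, the other reads it once.
runs = interleavings(["x=1", "x+=1"], ["print(x)"])
for run in runs:
    print(run)
# 3 runs: the read can fall before, between, or after the two writes.
```

Even this tiny example shows why the indirection is hard to manage: the number of interleavings grows combinatorially with thread length, which is one reason dedicated models of concurrency build nondeterminism in directly instead.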

¹ In this respect, thought (what happens in our brain) is still magic in the sense that we have no idea how it's done, we don't have a scientific understanding of it. Anything we know how to reproduce (not in the biological sense!) is Turing-computable.
² Note that here, the language includes everything you can't define by yourself. In this sense, the standard library is part of “the language”.

Woodworking answered 4/12, 2011 at 2:47 Comment(7)
Thank you for your answer, as it assuages my uneasiness about concurrency and TMs. I felt like an idiot asking the question because I took CS theory in college. But recently I have been reading up on quantum mechanics, and that is where this question came from: the non-deterministic state of nature (i.e., the cat is both dead and alive).Docilu
So does this mean that lambda calculus is not Turing-complete?Snowflake
@Snowflake The lambda calculus is Turing-complete: it has the same computational power as Turing machines. Concurrency can express things that sequential models of computation can't.Pontificate
If LC is Turing-complete, then shouldn't we be able to model a Turing machine in it? And we can implement the parallel or on a Turing machine (just use a dual-tape machine, which is equivalent to a normal machine). So we should be able to have our parallel or in LC, too, no?Snowflake
@Snowflake You can model a Turing machine in the lambda calculus. It's tedious, but not conceptually difficult. The reason this doesn't give you parallel or in the lambda calculus is that the translation is between whole computations on integer-like inputs and outputs. The translation does not preserve the internal mechanisms of the computation. Parallel or requires the lambda calculus to “inspect itself”. This direct reflexivity is not something that the Church-Turing thesis encompasses.Pontificate
@Snowflake I struggled to answer your question and looked for didactic help online. I found these slides by Andrés Sicard-Ramírez quite interesting. Even if you skip some of the slides that cover more advanced material, I suggest reading them through to get a better understanding of the final “Discussion” slides.Pontificate
I found them, too ;) I saved them up to read later since they seemed quite involved. Thanks!Snowflake
