For example, are there certain things when writing an operating system that cannot be accomplished in a Turing-complete language?
No. Or at least, if you found one, that would be a disproof of the Church-Turing thesis.
However, there are languages that are Turing complete but in which it is a complete pain in the ass to write certain programs, e.g., string processing in FORTRAN, numerical programming in COBOL, integer arithmetic in sed, and practically everything in x86 assembler.
And then of course there's brainfuck.
Yes, you can have a language that does not allow you to manipulate the hardware directly. It would be hard to write an operating system using the Bourne shell, for example. But these limitations matter less than you might think. Operating systems have been written in Standard ML, Scheme, and even Haskell!
Turing completeness is the most general formal definition of completeness. There are language features that are necessary for certain applications, but these don't fit into the formal definitions.
For example, graphics capabilities, ability to spawn background processes, ability to persist state, and ability to connect to the network are all useful features, but don't relate to a language's Turing-completeness. So Python on Google App Engine or a Java applet running in a sandbox is still Turing-complete.
You'll notice that in many cases, these types of features are provided by libraries, rather than the core language.
If you're speaking of pragmatics, then certainly... you can imagine a programming language with no ability to read or write files (e.g. a language that can compute any function on integers but that's it)... Just because a language can't operate my toaster doesn't mean it's not Turing-complete, but it does mean there are things it cannot do, so I am not sure how 'important' or useful this distinction is.
Depending on context, "accomplishing something in a language" means different things to different people. Turing is one of those people, and he defined very precisely what he means by "complete".
If a language (or a theoretical machine) is Turing complete, there are no Turing computations it cannot do. This does not imply that the language is omnipotent, just that it is good at sums. There are many "things" which are not Turing computations, and which therefore a Turing-complete computer may not be able to do.
"Be an operating system" is not a Turing computation. To be an operating system, you need to do more things than just computations. You also need to manipulate hardware.
Given a Turing complete language, together with a set of operations for doing all the hardware manipulation that you need, including a suitable concept of input and of time, you can write an OS. Or I should say, it is possible to write an OS. Whether you personally can do it depends on how easy the language is to work with, and on physical limitations which the Turing definition ignores, such as whether the resulting program would actually fit and execute in the memory of the device you want it to operate, and run in a realistic time.
Ignoring practical limitations though - yes, you can write an OS in any Turing complete language, provided that the language also has sufficient operations to drive the hardware. "Library calls", if you wish to take the practical CS approach of distinguishing language from libraries. Turing made no such distinction, because his computing model doesn't have the concept of a "call" anyway. You also need to solve the bootstrap problem - either your hardware must directly run the language you're writing in, or else you need a compiler into a language which the hardware does run, or else you need an interpreter written in a language which the hardware runs. Again, Turing ignores all this because his computation model uses abstract hardware, whereas writing an OS is all about the hardware.
In English (rather than CompSciSpeak) it is common to say that a language "lacks certain features", and arguably that it is therefore "not complete" by comparison with another language which has them. One might counter-argue that it is possible to implement closures in C. One can for example write a C program which is a Lisp interpreter, and embed in it a Lisp program as a string. Voila, closures in C. However, this is not what most people are asking for if they say, "I wish C had closures". If you think every language needs closures, then C is incomplete. If you think every language needs structured programming, then ARM assembler is incomplete. If you think it should be possible to dynamically add methods to an object, then C++ is incomplete, even though it's perfectly possible to write a C++ class with "AddMethod" and "CallMethodByName" methods and fake up your own method calling convention from there. And so on.
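The C++ remark about faking your own method-calling convention can be sketched in a few lines. Here is a minimal sketch (in JavaScript for brevity; the names makeObject, addMethod and callMethodByName are invented for illustration):

```javascript
// A hand-rolled dynamic dispatch table, mimicking what a C++ class with
// "AddMethod"/"CallMethodByName" members would have to do by hand.
// All names here are hypothetical, for illustration only.
function makeObject() {
  const methods = {};                       // name -> function table
  return {
    addMethod(name, fn) { methods[name] = fn; },
    callMethodByName(name, ...args) {
      if (!(name in methods)) throw new Error("no such method: " + name);
      return methods[name](...args);
    }
  };
}

// Usage: methods are attached at run time, not declared up front.
const obj = makeObject();
obj.addMethod("double", x => x * 2);
obj.callMethodByName("double", 21);  // 42
```

It works, but nobody would call this "having dynamic methods" in the language itself, which is exactly the point being made.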
Turing doesn't think languages need any of these conveniences: they don't affect what computations can be performed, just how easy it is to write certain programs. The concept of Turing completeness has nothing to say about what programs look like, or how they're organised, only what they output. So those languages are Turing complete, but from the programmer's point of view there are certain things which cannot be accomplished in those languages.
A language may or may not support things like subroutines, recursion, custom data types, loops, class definitions, goto, etc. The set of such features is what makes it complete or not. For example, a language is incomplete if it has no loops, gotos, or subroutines: you cannot perform any cyclic operation. Language completeness is a very theoretical, scientific notion. For example, it has been proven that even if your language does not allow functions to call themselves recursively, but does allow function pointers, you can simulate recursion, e.g. with the Y combinator.
Stuff like working with files and hardware is very often not part of the language. To accomplish any programming task you need more than a language: you need an environment for your program to run in. In x86 assembly you have the instruction "int" with a single parameter, but it is up to the OS to perform certain operations when you execute "int 21h".
A language just needs some way to communicate with its environment; the rest is up to the environment. It is valid to write "mov ax,bx" in x86 asm, but it is up to your CPU to perform that operation.
Being incomplete in some other way is easy: just define your own version of completeness. E.g., I hate working without class-based OOP, so let's define that a language is not Feldman-complete if it lacks language features supporting class-based OOP. OK then: C and JavaScript are not F-complete. You can still do anything in those languages, and even simulate class-based OOP to some degree.
Regarding an OS: you still need a processor that runs instructions and a compiler that translates your language into CPU instructions. One can imagine a compiler translating almost anything into machine code. For example, the following is valid JS code
for (var i = 0; i < 10; i++) {
  mov("ax", i);
  int(0x21);
}
and it would not be that hard to compile it into machine code.
In the modern world you choose not only a language: you also choose a platform, standard and non-standard libraries, literature, community, support, etc. All those things affect your ability to do a certain thing, and taken altogether they may or may not be "complete enough" for your task. But if I cannot compile C++ code into a Java applet, that is not an incompleteness of the C++ language. It's just the absence of a compiler that transforms C++ code into something the JVM can load.
If you want, you can design a language with features like "OpenFile, PingNetworkHost, DownloadMpeg4FileOverHttpsAndPlayBackwards". But if a language lacks them, they can still be implemented with other language features plus environment support, so their absence does not make the language incomplete. However, if your language is like C but without the goto operator, loop operators (for, while, do while), and functions, then you cannot write a cyclic program, and no environment or library can help you.
The answer is a most definite yes. Turing completeness only implies that a Turing complete language can be used to compute any computable function. For one, it doesn't say anything about the complexity of a computation.
You can usually expect that any polynomial time algorithm can be expressed as a polynomial time algorithm in any other Turing complete language, but that's about it. Especially any real time requirements (soft or hard) go out of the window, if your only focus is Turing completeness.
Another important thing is expressiveness of a language, which is largely a subjective property, but you can appreciate that programs are much harder to write in any machine code than, say, Java.
As for operating systems, an interface to the hardware is a must, but any language can be fitted with such utilities.
[Edit] Another thing I might add is that no actual implementation of any programming language is Turing complete, by the nature of our finite computing machines. While the Church-Turing thesis, along with related seminal discoveries (like the halting problem), lays a foundation for our understanding of computing, these results rarely meet the world of practical computing.
When we talk about languages, it is usually assumed that they run on some very simple machine. As such, any concept of reading from a file or accessing the network is not normally considered relevant to the power of a language.
There are different classes of machines commonly used in computability theory, each with nearly endless variations:
- Finite automata. This is the simplest class of machines, and they can recognize the smallest class of languages, which happens to be exactly the languages that regular expressions can describe, e.g. whether a string begins with two 'a's and ends with a 'd'. They cannot be used to recognize whether a string contains a balanced set of parentheses, for example.
- Push-down automata. These are essentially finite automata extended with a stack. Unlike finite state machines, they can tell whether a particular string contains a balanced set of parentheses. The languages that push-down automata can recognize are exactly those that can be described by context-free grammars, and they are often used to parse source code.
- Turing machines. These are the most powerful machines in this list, but that doesn't mean they can answer every question. They are incapable of answering the question "does this string describe a Turing machine that will run forever?" In fact, a Turing machine cannot decide any non-trivial semantic property of an arbitrary Turing machine (Rice's theorem).
So yes, a Turing machine has limitations, and there are (hypothetical) classes of machines that could do things a Turing machine cannot, but in all probability they have only ever existed in theory, never in practice.
I don't think definitions of completeness other than Turing completeness (or those for regular expressions, or push-down automata) are relevant for languages. But such completeness concerns only the numeric or symbolic computing facilities of a language.
The things you are mentioning seem to me to be more a function of the runtime and environment than the language. There's an important distinction, and formal notions of completeness typically only apply to languages themselves.