> I've read that most programming languages can be parsed with a context-free grammar (CFG). In terms of computational power, it is equivalent to that of a nondeterministic pushdown automaton. Am I right?
Technically yes. Usefully, no.
There are at least two useful ways to think about these questions:
- If you're thinking of a set of strings, you have a language.
- If you're thinking about an algorithm to decide whether a string is or is not in a language, you have a decision problem.
The difficulty is that while most programming languages have an underlying structure that is easily described by a context-free grammar (Tcl being an interesting exception), many sentences that are described by the context-free grammar are not actually "in the language," where by "in the language" I mean "a valid program in the language in question." These extra sentences are usually ruled out by some form of static semantics. For example, the following utterance is a sentence in the context-free grammar of C programs but it is not itself in the set of valid C programs:
```c
int f(void) { return n + 1; }
```
The problem here is that `n` is not in scope. C requires "declaration before use", and that property cannot be expressed using a context-free grammar.
A typical decision procedure for a real programming language is part of the front end of a compiler or interpreter, and it has at least two parts: the first, the parser, is equivalent in decision power to a pushdown automaton; the second performs additional checks that rule out many utterances as invalid. If these checks require any kind of definition-before-use property, they can't be done by a pushdown automaton or a context-free grammar.
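To make that second phase concrete, here is a minimal Haskell sketch. The toy expression language and the names are my own invention, standing in for C, not any real compiler's front end. The parser has already produced the tree; a separate pass rejects programs that use a name before declaring it:

```haskell
-- A name is in scope only if an enclosing Let binds it.
data Expr
  = Var String            -- use of a name
  | Lit Int
  | Add Expr Expr
  | Let String Expr Expr  -- let x = e1 in e2

-- Walk the tree carrying the set of names in scope; collect uses
-- of undeclared names. The parser has already accepted the string;
-- this pass may still reject it.
undeclared :: [String] -> Expr -> [String]
undeclared scope (Var x)
  | x `elem` scope = []
  | otherwise      = [x]
undeclared _     (Lit _)   = []
undeclared scope (Add a b) = undeclared scope a ++ undeclared scope b
undeclared scope (Let x e body) =
  undeclared scope e ++ undeclared (x : scope) body

main :: IO ()
main = do
  -- corresponds to the C example above: 'n' is used but never declared
  print (undeclared [] (Add (Var "n") (Lit 1)))                    -- ["n"]
  print (undeclared [] (Let "n" (Lit 0) (Add (Var "n") (Lit 1))))  -- []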
> If it's true, then how could a CFG hold an unrestricted grammar (UG), which is Turing-complete?
A CFG doesn't "hold" anything—it simply describes a language.
> ... even if programming languages are described by CFGs, they are actually used to describe Turing machines, and so via a UG.
You're skipping past some important levels of indirection here.
> I think that's because there are at least two different levels of computing: the first, the parsing of a CFG, focuses on the syntax, related to the structure (representation?) of the language, while the other focuses on the semantics (sense, interpretation of the data itself?), related to the capabilities of the programming language, which is Turing-complete. Again, are these assumptions right?
They seem a bit muddled to me, but you're on the right track. A key question is "what's the difference between a language and a programming language?" The answer is that a programming language has a computational interpretation. Computational interpretations come in many fine varieties, and not all of them are Turing-complete.
But the magic is in the interpretation, not in the syntax, so the Chomsky hierarchy is not very relevant here.
To prove my point, an extreme example: the regular language `[1-9][0-9]*` is Turing-complete under the following interpretation:
- The SK-combinator language is Turing complete.
- There are only countably many SK programs.
- They can easily be enumerated uniquely and deterministically.
- Therefore we can associate each positive integer with an SK program.
- If we interpret a sequence of digits as a positive integer in the standard way, we can equally well interpret the same sequence of digits as an SK program, and moreover, any SK program can be expressed using a finite sequence of digits.
Therefore the language of integer literals is Turing-complete.
If your head doesn't hurt now, it should.
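If you want to see the enumeration made concrete, here is a minimal Haskell sketch. The encoding is one assumption among many possible ones (any unique, deterministic enumeration works); `decode` maps the integer denoted by a digit string to an SK program:

```haskell
-- SK terms: the combinators S and K, plus application.
data SK = S | K | App SK SK deriving Show

-- All SK terms with exactly n leaves: split the leaves between the
-- function and argument of an application in every possible way.
termsOfSize :: Int -> [SK]
termsOfSize 1 = [S, K]
termsOfSize n =
  [ App f x
  | i <- [1 .. n - 1]
  , f <- termsOfSize i
  , x <- termsOfSize (n - i)
  ]

-- One infinite list containing every SK term exactly once,
-- enumerated by size: the unique, deterministic enumeration.
allTerms :: [SK]
allTerms = concatMap termsOfSize [1 ..]

-- Interpret a positive integer (i.e., a string matching [1-9][0-9]*)
-- as an SK program: the n-th term of the enumeration.
decode :: Integer -> SK
decode n = allTerms !! fromIntegral (n - 1)

main :: IO ()
main = mapM_ (print . decode) [1 .. 6]
```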
(To expand on the Tcl exception: Tcl's control structures (`if`, `while`, program blocks in general) are defined entirely at the semantic level. That is, they're standard library functions, nothing more. The flip side of this is that it is really easy to embed foreign syntaxes inside Tcl programs, provided they're parenthetically balanced. Virtually everything is…) – Priddy