I am reading the book Computational Complexity: A Modern Approach and I am having problems understanding oblivious Turing machines.
An oblivious Turing machine (TM) is a TM whose head movements are entirely determined by the length of its input. That is, the TM is oblivious to the contents of its input. So far so good.
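For intuition, here is a small Python sketch (my own illustration, not from the book) of such a head-movement schedule: a back-and-forth sweep whose trajectory depends only on the input length n, never on the tape contents.

```python
def oblivious_sweep_positions(n: int, steps: int) -> list[int]:
    """Head positions for a back-and-forth sweep over tape cells 0..n-1."""
    positions = []
    pos, direction = 0, 1
    for _ in range(steps):
        positions.append(pos)
        if not 0 <= pos + direction < n:  # bounce at the tape ends
            direction = -direction
        if 0 <= pos + direction < n:      # stay put only when the tape is a single cell
            pos += direction
    return positions

# Every input of length 4 produces the exact same trajectory -- the
# defining property of an oblivious TM:
print(oblivious_sweep_positions(4, 10))  # [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
```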
But one of the exercises is to prove the following theorem:
If a language L is decidable in time T(n), then there exists an oblivious TM that decides L in time O(T(n)^2).
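To parse the bound (an accounting sketch of my own, not the book's proof): if each of the T(n) steps of the original machine were simulated by a full sweep over the O(T(n)) tape cells it can ever reach, the total running time would be

$$\sum_{i=1}^{T(n)} O\bigl(T(n)\bigr) = O\bigl(T(n)^2\bigr),$$

which matches the claimed quadratic overhead.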
It seems to me that the oblivious TM must not operate on the original input of L but on some encoded version of it. That is, the gist of the theorem would be an encoding of bitstrings into integers (the lengths of the oblivious TM's inputs). But if one wanted to encode the set of possible inputs of L (bitstrings) into integers this way, one would run into very large numbers fairly quickly, since there are 2^n bitstrings of length n.
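To make "very large numbers" concrete (my own numerical illustration): an encoding that distinguishes bitstrings by input length alone would need lengths up to 2^n.

```python
# Distinguishing all 2**n bitstrings of length n purely by the length of
# an encoded input would require input lengths up to 2**n.
for n in (10, 100, 300):
    print(f"n = {n}: 2**n = {2 ** n}")
# Already for n = 300, 2**300 > 10**90 -- more than the estimated number
# of atoms in the observable universe (~10**80).
```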
Am I understanding the problem correctly? How do you prove the theorem?