Until the 1930's, it was the consensus among mathematicians --- mostly not spelled out precisely --- that every mathematical question that can be formulated precisely can also be solved. This statement has two interpretations. We can talk about a single yes-or-no question (say: is every planar graph 4-colorable? is every even integer larger than 2 the sum of two primes?), and then decidability means that the statement can be proved or disproved from the axioms of set theory (which were, and still are, generally accepted as the axioms of mathematics). This belief was destroyed by the Austrian mathematician Gödel, who published a famous result in 1931. According to this result, there are perfectly well formulated mathematical questions that cannot be answered from the axioms of set theory.
Now one could think that this is a weakness of this particular system of axioms: perhaps by adding some generally accepted axioms (which had been overlooked) one could get a new system that would allow us to decide the truth of every well-formulated mathematical statement. Gödel proved that this hope was also vain: no matter how we extend the axiom system of set theory (allowing even infinitely many axioms, subject to some reasonable restrictions: no contradiction should be derivable, and it should be possible to decide about any statement whether it is an axiom or not), there still remain unsolvable problems.
The second meaning of the question of decidability is when we are concerned with a family of problems and are looking for an algorithm that decides each of them. In 1936, Church formulated a family of problems for which he could prove that they are not decidable by any algorithm. For this statement to make sense, the mathematical notion of an algorithm had to be created. Church used a tool from logic, the notion of recursive functions, to formalize algorithmic solvability.
Just as in the case of Gödel's Theorem, it seems quite possible that one could define algorithmic solvability in a different way, or extend the arsenal of algorithms with new tools, allowing the solution of new problems. In the same year in which Church published his work, Turing created the notion of a Turing machine. Nowadays we call something algorithmically computable if it can be computed by some Turing machine. It turned out, however, that Church's original model is equivalent to the Turing machine in the sense that the same computational problems can be solved by them. We have seen in the previous chapter that the same holds for the Random Access Machine. Many other computational models have been proposed (some quite different from the Turing machine, the RAM, or any real-life computer, like quantum computing or DNA computing), but nobody has found a machine model that could solve more computational problems than the Turing machine.
Church in fact anticipated this by formulating the so-called Church Thesis, according to which every ``calculation'' can be formalized in the system he gave. Today we state this hypothesis in the form that all functions computable on any computing device are computable on a Turing machine. As a consequence of this thesis (if we accept it) we can simply speak of computable functions without referring to the specific type of machine on which they are computable.
(One could perhaps make an exception to the Church Thesis for algorithms using randomness. These can solve certain algorithmically unsolvable computational problems in the sense that the answer is correct with large probability; cf. Chapter 7 on Information Complexity.)
Let us restrict our attention to yes-or-no decision problems. Such a problem can be identified with the set (or language) of its positive instances. If the question is decidable by a Turing machine, this language is called recursive. We also introduce the notion of recursively enumerable languages. These are only slightly more general than recursive languages, but we shall be able to find a recursively enumerable but non-recursive language: in other words, a simply stated but algorithmically undecidable problem. (The idea behind the definition of recursively enumerable languages, namely non-determinism, will play a central role in the theory of computational complexity.)
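To give a first, informal preview (the precise definitions are given later in this chapter, and the notation used here is only provisional), let $\Sigma$ be a finite alphabet and let $\Sigma^*$ denote the set of all finite strings over $\Sigma$. A language $\mathcal{L} \subseteq \Sigma^*$ is called \emph{recursive} if there is a Turing machine that, on every input $x \in \Sigma^*$, halts and correctly reports whether $x \in \mathcal{L}$. In one of several equivalent formulations, $\mathcal{L}$ is \emph{recursively enumerable} if there is a Turing machine that halts on input $x$ exactly when $x \in \mathcal{L}$; on inputs outside $\mathcal{L}$ it may run forever. Every recursive language is recursively enumerable, but, as stated above, not conversely.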
The first questions that can be proved to be undecidable are self-referential in nature (and thereby a priori suspect of foul play): for example, the halting problem, the question whether a given Turing machine, with a given input, will halt in a finite number of steps or not. The problems of logic that turn out to be algorithmically undecidable (see below) are also self-referential (or appear simply too ambitious). We might therefore think that mathematical problems occurring in ``real life'' are decidable. But it turns out that quite a few algorithmically unsolvable problems in mathematics (in number theory, algebra, topology) are neither self-referential nor of a logical character.
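To indicate informally where this self-reference leads (the halting problem is treated precisely later in this chapter; the machines $H$ and $D$ below are introduced only for this sketch), suppose that some Turing machine $H$ decided the halting problem: given the description of a Turing machine $p$ together with an input $x$, the machine $H$ would always halt and report whether $p$ halts on $x$. From $H$ one could build a machine $D$ that, on input $p$, runs $H$ on the pair $(p,p)$ and then does the opposite of what $H$ predicts: $D$ halts if $H$ reports that $p$ does not halt on input $p$, and runs forever otherwise. Running $D$ on its own description gives a contradiction: $D$ halts on input $D$ if and only if it does not. Hence no such machine $H$ can exist.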
Finally, we discuss Gödel's Undecidability Theorem, and show that Church's and Gödel's undecidability results are very closely related.