An *algorithm* is a mathematical procedure serving for a
computation or construction (the computation of some function)
that can be carried out mechanically, without thinking. This is not
really a definition, but one of the purposes of this course is to
demonstrate that a general agreement can be achieved on these matters.
(This agreement is often formulated as *Church's thesis*.) A
program in the Pascal (or any other) programming language is a good
example of an algorithm specification. Since the ``mechanical''
nature of an algorithm is its most important feature, instead of the
notion of algorithm, we will introduce various concepts of a
*mathematical machine*.

Mathematical machines compute some *output* from some
*input*. The input and output can be a word (finite sequence)
over a fixed alphabet. Mathematical machines are very much like the
real computers the reader knows but somewhat idealized. First, we omit
some inessential features (e.g. hardware bugs). Second, and more
importantly, we remove limitations on the input size. As a
consequence, we cannot work with a single finite machine; we must
consider either an infinite family of finite computers of growing
size, or a single (idealized) infinite computer (for example, we add
an infinitely expandable memory). The latter approach has the
advantage that it avoids the questions of what infinite families are
allowed and how they are specified. One could think of this as
building larger and larger finite machines for larger and larger
tasks. Though the detailed structure of these machines is different,
one usually has some degree of uniformity over all sizes. For
theoretical purposes, it is desirable to pursue this uniformity to the
limit and to design a computer model infinite in size, thereby
accommodating tasks of arbitrary size.

Here is a typical problem we often solve on the computer: Given a
list of names, sort them in alphabetical order. The input is a string
consisting of names separated by commas: Bob, Charlie, Alice. The
output is also a string: Alice, Bob, Charlie. The problem is to
compute a *function* assigning to each string of names its
alphabetically ordered copy.
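As an illustration, this function can be sketched in a few lines (Python is used here purely for illustration; the function name is ours):

```python
def sort_names(s):
    """Map a comma-separated string of names to its alphabetically ordered copy."""
    names = [name.strip() for name in s.split(",")]
    return ", ".join(sorted(names))

print(sort_names("Bob, Charlie, Alice"))  # prints: Alice, Bob, Charlie
```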

The simplest model of computation is the *finite automaton*. In a sense,
finite automata can model every conceivable device, but only in a very
inefficient way. A fixed finite automaton, when used on inputs of arbitrary size,
can compute only very primitive functions, and is not an adequate
computation model in itself. The example problem (sorting) above
cannot be solved by a fixed finite automaton. We could design an
automaton that solves the sorting problem for a fixed list length, but
the size of the automaton would be so huge that the whole construction
would be meaningless.
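To make the notion concrete, here is a minimal sketch of a fixed finite automaton (the encoding is ours): it has a finite set of states and reads the input once, symbol by symbol, with no other memory. Such a machine can decide, for example, whether a binary word contains an even number of 1s, but nothing like sorting.

```python
def run_dfa(word):
    """A fixed finite automaton: accepts binary words with an even number of 1s.

    Two states, encoded as 0 (even number of 1s seen so far, accepting)
    and 1 (odd); the input is read once, left to right.
    """
    state = 0
    for symbol in word:
        if symbol == "1":
            state = 1 - state  # toggle the parity state
    return state == 0

print(run_dfa("1100"))  # prints: True
```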

Historically, the first ``infinite'' model of computation was the
*Turing machine*, introduced by the
English mathematician A. Turing in 1936, before the invention
of programmable computers. This model can be thought of as a finite
automaton (control unit) equipped with an infinite storage (memory).
More exactly, the memory is an infinite one-dimensional array of
cells. The control unit can make arbitrary local changes to the
scanned memory cell and change the scanned position by one step.
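This description can be turned into a small simulator sketch (the transition-table format and the blank symbol are our own conventions, not a standard notation): the control unit reads the scanned cell, rewrites it, changes state, and moves the head by one step until it halts.

```python
def run_tm(transitions, tape, state="start"):
    """Simulate a Turing machine given as a transition table mapping
    (state, scanned symbol) -> (new state, symbol written, head move -1/0/+1)."""
    cells = dict(enumerate(tape))    # the "infinite" tape: absent cells read as blank "_"
    head = 0
    while state != "halt":
        scanned = cells.get(head, "_")
        state, write, move = transitions[(state, scanned)]
        cells[head] = write          # local change to the scanned cell
        head += move                 # move the scanned position by one step
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Example machine: replace every 1 by 0, halt at the first blank.
zero_out = {
    ("start", "1"): ("start", "0", +1),
    ("start", "0"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(zero_out, "1011"))  # prints: 0000
```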

All computations that could ever be carried out on any other mathematical machine model can also be carried out by Turing machines (this empirical fact is often called ``Church's thesis''). In fact, Turing machines are in a sense the simplest machines with this universality, which makes them very suitable for theoretical investigations of computability questions. At the same time, they are less appropriate for the description of concrete algorithms, since their specification is awkward and they differ from real-life computers in several important respects.

The most important weakness of the Turing machine in comparison
with real computers is that its memory is not accessible immediately: in
order to read a distant memory cell, all intermediate cells must also
be read. This is remedied by the notion of a *Random Access Machine* (RAM). The RAM
can reach an arbitrary memory cell in a single step. It can be
considered a simplified model of real world computers, along with the
abstraction that it has unbounded memory and the capability to store
arbitrarily large integers in each of its memory cells. The RAM can
be programmed in some specified but arbitrary programming language.
For the description of algorithms, it is practical to use the RAM
since this is closest to real program writing. But we will see that
the Turing machine and the RAM are equivalent from many points of
view; what is most important, the same functions are computable on
Turing machines and on the RAM.

Unfortunately, these nice properties of the RAM machine come at a cost: in order to be able to read a memory register immediately, we must address it; the address must be stored in another register. Since the number of memory registers is not bounded, the address is not bounded either, and therefore we must allow an arbitrarily large number in the register containing the address. The content of this register must itself change during the running of the program (indirect addressing). This further increases the power of the RAM.
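Indirect addressing can be sketched with a toy interpreter (the three-instruction set below is illustrative, not any standard RAM instruction set): registers hold unbounded integers, and one register's content serves as the address of another.

```python
from collections import defaultdict

def run_ram(program):
    """Run a toy RAM program given as a list of (opcode, a, b) triples."""
    reg = defaultdict(int)   # unbounded memory: register index -> arbitrarily large integer
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "SET":
            reg[a] = b               # reg[a] := constant b
        elif op == "ADD":
            reg[a] += reg[b]         # reg[a] := reg[a] + reg[b]
        elif op == "LOADI":
            reg[a] = reg[reg[b]]     # indirect addressing: reg[a] := reg[reg[b]]
        pc += 1
    return reg

prog = [("SET", 7, 42),    # store 42 in register 7
        ("SET", 1, 7),     # store the address 7 in register 1
        ("LOADI", 0, 1)]   # reg[0] := reg[reg[1]] = reg[7]
print(run_ram(prog)[0])    # prints: 42
```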

(By not bounding the number contained in a register, we are moved again away from existing computers. If we do not watch out, ``abusing'' the operations with large numbers, we can program on the RAM algorithms that can be implemented on existing computers only in a much less efficient way. We'll come back to this issue when discussing time and space bounds in Chapter 3.)

*Boolean (logical) circuits*
provide a model of computation that is not equivalent to the Turing
machine. There are two ways of looking at a Boolean circuit: either as
the purely logical description of bit-by-bit operations during the
computation performed by any computing device, or, in a sense at the
other end of the scale, as a very explicit computing device that models
the working of real-life microprocessors quite closely.
In this first sense, a Boolean circuit is a very general model. However,
a given Boolean circuit allows only a given size of input,
and in this way, it can solve only a finite number of problems. It will
be evident that for a fixed input size, every function is
computable by a Boolean circuit. If we restrict the size of the circuit,
however, then the difference between Boolean circuits and
Turing machines or the RAM will not be that essential. Since the
structure and functioning of Boolean circuits is the most transparent and
tractable among all computation models, they play a very important role
in theoretical investigations (especially in the proof of lower bounds
on complexity, see Chapter 13).
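For concreteness, a Boolean circuit can be represented as a list of gates in topological order and evaluated gate by gate. The encoding below is a sketch of ours; the example circuit computes the majority of three input bits:

```python
def eval_circuit(gates, inputs):
    """Evaluate a circuit given as (gate name, operation, input names) triples,
    listed in topological order, on a dict of named input bits."""
    values = dict(inputs)
    for name, op, args in gates:
        bits = [values[a] for a in args]
        # AND/OR gates may have any fan-in; NOT takes a single input.
        values[name] = {"AND": all, "OR": any}[op](bits) if op != "NOT" else not bits[0]
    return values

majority = [
    ("xy", "AND", ["x", "y"]),
    ("yz", "AND", ["y", "z"]),
    ("xz", "AND", ["x", "z"]),
    ("out", "OR", ["xy", "yz", "xz"]),
]
print(eval_circuit(majority, {"x": True, "y": True, "z": False})["out"])  # prints: True
```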

Boolean circuits, on the other hand, are important elements of
modeling real-life hardware. If a clock is added to a Boolean
circuit, we obtain a device called *clocked circuit* that is equivalent
to a finite automaton (but with a description that can be used to
build a physical realization of it). Memory registers can also be
modelled by clocked circuits, and if these are added to a Boolean
circuit we obtain a machine model that is very close to computers
today.

One of the simplest ways to obtain an infinite machine is to connect
an infinite number of identical finite automata into an array. This way we
get a *cellular automaton*. Cellular
automata are universal computing devices, just as Turing machines are; at
the same time, they produce interesting phenomena such as the
well-known ``Game of Life'' (reproduced on many screen-savers). With
progress in parallel computation, as well as in biology, cellular
automata are becoming more and more important.
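A one-dimensional cellular automaton can be sketched in a few lines. We simulate only a finite window of the ideally infinite array, treating cells outside the window as 0; each cell's next state depends only on itself and its two neighbours (the rule numbering below follows the common convention for elementary cellular automata, here rule 110):

```python
def step(cells, rule=110):
    """One synchronous step of an elementary cellular automaton on a finite window."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0        # cells outside the window read as 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = 4 * left + 2 * cells[i] + right  # neighbourhood as a 3-bit number
        nxt.append((rule >> pattern) & 1)          # look up that bit of the rule number
    return nxt

print(step([0, 0, 1, 0, 0]))  # prints: [0, 1, 1, 0, 0]
```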