When an algorithm solves a problem, it implicitly also provides
a proof that its answer is correct. This proof is, however, sometimes
much simpler (shorter, easier to inspect and check) than following the
whole algorithm. For example, checking whether a natural number
*a* is a divisor of a natural number *b* is easier than
finding a divisor of *b*. Here is another example.
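This contrast between checking and finding can be sketched in a few lines of code (the function names and the trial-division search are our own illustration, not part of the text):

```python
def is_divisor(a: int, b: int) -> bool:
    """Verify the certificate: does a divide b?  One modulo operation."""
    return b % a == 0

def find_divisor(b: int) -> int:
    """Search for a nontrivial divisor of b by trial division.
    May take on the order of sqrt(b) steps (e.g., when b is prime)."""
    d = 2
    while d * d <= b:
        if b % d == 0:
            return d
        d += 1
    return b  # b is prime: only trivial divisors exist
```

Verifying a proposed divisor is a single arithmetic operation, while finding one may require a long search; the divisor itself serves as a short proof.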
König's theorem says that in a bipartite graph, if the size of
a maximum matching is *k*, then there are *k* nodes such
that every edge is incident to one of them (a *node-cover*).
There are several methods for finding a maximum matching, e.g., the
so-called Hungarian method, which, though polynomial, takes considerable time.
This method also produces a node-cover of the same size as the
matching. The matching and the node-cover together already supply,
by themselves, a proof that the matching is maximum.
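A minimal sketch of checking such a certificate, assuming the graph is given as an edge list (this representation and the function name are our own choices): since every matching is at most as large as every node-cover, a matching and a cover of equal size certify that both are optimal.

```python
def verify_matching_and_cover(edges, matching, cover):
    """Check that `matching` is a matching in the graph, `cover` is a
    node-cover, and the two have equal size, certifying optimality."""
    edge_set = {frozenset(e) for e in edges}
    # The matching must consist of graph edges sharing no endpoint.
    endpoints = [v for e in matching for v in e]
    if len(endpoints) != len(set(endpoints)):
        return False
    if any(frozenset(e) not in edge_set for e in matching):
        return False
    # The cover must touch every edge of the graph.
    cover_set = set(cover)
    if any(u not in cover_set and v not in cover_set for u, v in edges):
        return False
    # Equal sizes: the matching is maximum and the cover is minimum.
    return len(matching) == len(cover_set)
```

The check runs in one pass over the data, far faster than rerunning the Hungarian method.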

We can also reverse our point of view and investigate the proofs without worrying about how they can be found. This point of view is profitable in several directions. First, if we know what kind of proof the algorithm must provide, this may help in constructing the algorithm. Second, if we know that for some problem even a proof of the correctness of the answer cannot be given within a certain time (or space), then we also have a lower bound on the complexity of any algorithm solving the problem. Third (but not least), by classifying problems according to the difficulty of proving the correctness of the answer, we obtain some very interesting and fundamental complexity classes.

These ideas, called *non-determinism*, will be treated in several
sections. One way to formalize non-determinism is to introduce *non-deterministic Turing machines*.
These machines are quite similar to those studied in Chapter 1, except
that at any time they are allowed to make one of several legal
moves. A computation consisting of legal moves will model the proof
that a certain computational result can be achieved. One should
emphasize that non-deterministic Turing machines do not serve as
models of any real computing device; we will see that these machines
are tools for the *formulation* of certain important classes of
problems rather than for their *solution*.

The *complexity of non-deterministic algorithms* can be defined
analogously to the complexity of deterministic Turing machines, but
also in a fashion more closely following the idea that we are just
trying to exhibit proofs (certificates, witnesses) of the validity of
the results.

The single most important non-deterministic complexity class is nondeterministic polynomial time, denoted by NP. Its relationship with the class P (deterministic polynomial time) is perhaps the most important unsolved problem in the theory of computing. This is the famous "P=NP" problem.

We go through a number of examples of problems in NP. These include the most basic problems in graph theory, optimization, number theory and algebra.
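As a foretaste, here is one such graph problem in this witness-checking style (the edge-list representation and function name are our own illustration): for the Hamiltonian cycle problem, a proposed ordering of all the nodes serves as a certificate that can be checked in polynomial time, even though no polynomial-time method is known for finding one.

```python
def verify_hamiltonian_cycle(n, edges, order):
    """Check that `order` lists each of the n nodes exactly once and
    that consecutive nodes (cyclically) are joined by an edge."""
    if sorted(order) != list(range(n)):
        return False
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((order[i], order[(i + 1) % n])) in edge_set
               for i in range(n))
```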

The class NP contains problems that are the most difficult in it, in
the sense that every other problem in NP can be reduced to them by a
polynomial-time computation. These problems are called *NP-complete*. If we can show that
a problem is NP-complete, then it follows that it cannot be
solved in polynomial time, unless P=NP.

The first NP-complete problems (both historically and in our presentation) are problems in logic (the so-called Satisfiability Problem), but the power of the theory comes from the fact that there is a large variety of other basic NP-complete problems in graph theory and other branches of mathematics: graph coloring, maximum clique, integer programming, etc. In fact, most of the graph-theoretic decision problems that people have considered can be classified as either polynomial-time solvable or NP-complete.
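For the Satisfiability Problem itself, the certificate is a satisfying truth assignment, and checking it takes only one pass over the formula. A small sketch, assuming the common convention of representing a CNF formula as a list of clauses of signed integers (literal *v* means variable *v* is true, *-v* that it is false):

```python
def check_sat(clauses, assignment):
    """Verify a truth assignment against a CNF formula: every clause
    must contain at least one true literal."""
    def lit_true(lit):
        val = assignment[abs(lit)]
        return val if lit > 0 else not val
    return all(any(lit_true(lit) for lit in clause) for clause in clauses)
```

Finding such an assignment, by contrast, is exactly the problem for which no polynomial-time algorithm is known.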