## My research and investigations

### Theorem Proving in AI

Mathematical knowledge is built upon theorems, and mathematicians spend their careers studying, reworking, and inventing new paths to proofs. Starting from the assumptions, or axioms, of a theory, a proof consists of a chain of logical deductions leading to the theorem. Open any modern textbook on math and you’ll find a dense exposition of definitions, lemmas, propositions and theorems, with proofs short and long.

Lemmas and propositions are stepping stones on the logical path to the theorems. Humans find it much easier to understand a process bit by bit, and in most cases these intermediate steps are significant in themselves, opening other paths of investigation. Following a long argument is tedious, much like reading the long source code of a program: if there are bugs lurking, they are hard to find.

Tedious but mechanical work is exactly what computers are suited for. From the start, the AI community designed programs that could do math like a freshman, e.g. the Geometry Theorem Prover (1958) and the Symbolic Automatic INTegrator, SAINT (1961). These produced math proofs as understood by the general math community.

However, there is another level of math proof that is the aim of purists: proving from the ground up. Here, the ground is the foundation of all math: axiomatic set theory. The task of putting all math on such a foundation, supported by detailed logical inferences, was proclaimed by David Hilbert in 1900. Much of the actual work was done by a group of mathematicians writing under the collective pseudonym Bourbaki, from the 1930s to the 1980s.

Hilbert’s ambition was to reduce all math to a symbol-manipulation game, with the logical symbols manipulated by logical inference rules. This is called the formalization of math. The first attempt to take up Hilbert’s challenge was Russell and Whitehead’s Principia Mathematica (1910s). In 1955, Allen Newell and Herbert Simon created the program Logic Theorist, which could prove some theorems of Principia Mathematica. Hope was high that a machine would soon prove all known math theorems.

The reality is that formalized proofs are long, with a vast number of steps. While computers can do the mechanical work, some human guidance is needed to get the work done in reasonable time. Human assistance supplies the stepping stones in the very long chain of deductions, steering the computer’s progress towards the goal.

Nowadays, there are several theorem provers developed by various research groups. The December 2008 issue of the Notices of the AMS is devoted to formal proof. It is not for the faint-hearted, but at least you can peek at the actual output of a theorem prover.
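To give a flavor of what such a machine-checked proof looks like, here is a toy example in the syntax of the Lean prover (one possible choice of system; the theorem name is mine). Commutativity of addition on the natural numbers is proved by induction, and every deduction step is verified by the prover’s kernel:

```lean
-- A machine-checked proof that a + b = b + a for natural numbers.
-- Each tactic step is verified against the axioms and prior lemmas.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero      => simp                                  -- base case: a + 0 = 0 + a
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]   -- inductive step uses ih : a + n = n + a
```

Even this one-line theorem expands into many kernel-level inference steps, which illustrates why fully formalized proofs grow so long.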


### What is AI

Nowadays it is much quicker to search for answers on the Web. A search on the subject of Artificial Intelligence (AI) returns a flood of answers, with diverse viewpoints but little consensus.

Rather than repeating what others say, I’ll illustrate AI with a little example: simple arithmetic.

Once a child learns enough arithmetic to understand that 1/2 = 0.5, he or she will try the next: 1/3. Converting this into a decimal gives 0.333… The child will soon realize that this is a never-ending job. Most likely you can’t trick him or her by saying, “maybe it will terminate if you go far enough.” The child can see through that and offer an explanation of why it won’t terminate. That’s math discovery for the child, and that’s intelligence, the natural kind.

Any good programmer can code a short program to do the same thing: convert unit fractions to decimals. A naive loop will print 1/3 as 0.333… forever! Of course, a smart programmer can put some “intelligence” into the program, making it “realize” that the digits repeat, so that it prints a message and terminates. That’s intelligence, the artificial kind.
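The programmer’s trick might be sketched as follows (my own illustration in Python, not any particular textbook’s version). The key insight: in long division, once a remainder recurs, the digits must cycle forever, so tracking remainders lets the program “realize” the repetition and stop:

```python
def decimal_expansion(numerator, denominator):
    """Long division that stops as soon as a remainder repeats.

    Returns (integer_part, non_repeating_digits, repeating_digits);
    repeating_digits is "" for a terminating decimal like 1/2.
    """
    integer_part, remainder = divmod(numerator, denominator)
    digits = []
    seen = {}  # remainder -> position where it first appeared
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        digit, remainder = divmod(remainder * 10, denominator)
        digits.append(str(digit))
    if remainder:  # a remainder recurred: the tail cycles forever
        start = seen[remainder]
        return integer_part, "".join(digits[:start]), "".join(digits[start:])
    return integer_part, "".join(digits), ""  # division came out even

print(decimal_expansion(1, 3))  # (0, '', '3')    i.e. 0.(3) repeating
print(decimal_expansion(1, 2))  # (0, '5', '')    i.e. 0.5 exactly
print(decimal_expansion(1, 6))  # (0, '1', '6')   i.e. 0.1(6) repeating
```

Since there are only finitely many possible remainders (fewer than the denominator), the loop is guaranteed to terminate, which is essentially the explanation the child discovers.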

The central debate of AI is this: can human intelligence always be coded into software, i.e. put into machines? Both the yes camp and the no camp have followers. For a balanced view, have a read on Wikipedia.

AI has a long and twisted history, with hopes and despairs, rather like a soap opera. Nonetheless, it is making progress, with a wide range of applications.

Our brains and our machines both compute. They have different makeups and use different algorithms, but are they different in smartness?