My research and investigations

A Web Interface for HOL

HOL output contains many math symbols, which are encoded in Unicode. Displaying these symbols in a DOS console when running HOL on Windows is unsatisfactory. To solve this problem, I developed a simple Java server that talks to HOL in the background, so that I can use a web browser to talk to the server. With such a system I can easily get the math symbols in the browser, and also enhance the display of HOL input/output with colors.

The code for my little work is in

This is also my first experience with Bitbucket, a free source-code hosting service based on Mercurial, a distributed version control system. There is a nice short tutorial by Joel Spolsky, which explains clearly and simply how to use Mercurial.


Running Linux in Windows

With VirtualBox, it’s pretty easy to run another operating system as long as you have a modern PC: plenty of memory, plenty of hard disk space.

However, it is still a great pleasure to see that this really works. VirtualBox uses hardware virtualization, making it a hypervisor. Hence it can host an entire guest operating system, down to the attached devices.

I’ve installed VirtualBox 4.0, and run a VM based on Ubuntu 10.10. I worked with Unix terminals a long time ago, so this “Desktop Linux” is a new experience for me. Such is the advance of technology.

As a Java developer, I like that Java is already included in Linux. There is also the GCC compiler, so there are plenty of programming tools to work with.

Both VirtualBox and Ubuntu come with heaps of features. These new toys will keep me busy for weeks!

The HOL System Tutorial is a fairly gentle introduction to using the HOL system. It provides screenshots of the interactive sessions, so you can follow what’s happening even if you don’t have the system installed. Better still, download and install HOL, then follow the tutorial to verify the sessions and get hands-on experience with the system.

The first substantial example is a proof of this theorem: there are infinitely many primes. The classic proof by Euclid is considered a proof from “The Book”, in which God keeps the most elegant proof of each mathematical theorem, according to the mathematician Paul Erdős. During a lecture in 1985, Erdős said, “You don’t have to believe in God, but you should believe in The Book.”

However, to rephrase this proof in HOL, many basic concepts need to be established first: What is a prime? How is divisibility defined? All these boil down to logical statements about natural numbers.
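These concepts, and Euclid’s argument itself, can be checked mechanically. Below is a minimal sketch in Python rather than HOL’s ML; the helper names (`divides`, `is_prime`, `prime_above`) are invented for the illustration. It uses the classic construction: every prime factor of n! + 1 must exceed n, so primes never run out.

```python
from math import factorial

def divides(a, b):
    """a divides b (for positive integers)."""
    return b % a == 0

def is_prime(p):
    """p is prime: p >= 2 and its only divisors are 1 and p."""
    return p >= 2 and all(not divides(d, p) for d in range(2, p))

def prime_above(n):
    """Euclid's construction: the smallest prime factor of n! + 1.

    No d in 2..n divides n! + 1 (each leaves remainder 1), so this
    factor is necessarily a prime greater than n.
    """
    m = factorial(n) + 1
    d = 2
    while not divides(d, m):
        d += 1
    return d

# For every n tried, we find a prime strictly larger than n.
for n in range(2, 8):
    p = prime_above(n)
    assert is_prime(p) and p > n
```

The HOL proof formalizes exactly this reasoning, but every step must be justified from the definitions of prime and divisibility.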

The actual proof process in HOL is assisted by the Proof Manager. There are actions to set a goal in the manager and to expand the goal with tactics, which are strategies that apply known facts or lemmas from a HOL theory. Expanding a goal usually gives subgoals, which can be further expanded until everything is known, at which point the initial goal is proved. Of course, picking the correct strategies takes some understanding and experience. The Proof Manager allows you to advance a step, back up a step, or redo from the start. All these techniques are demonstrated in this example of the Tutorial.

In the end, Euclid’s proof of the infinitude of primes is done in about 10 lines of HOL code, after much polishing. This is the finale of a whole chapter of more than 30 pages!

ML as a Functional Language

The challenge for any programming language X which claims to be general-purpose is this: can X compile itself? More specifically, can language X be used to write a compiler that compiles itself?

Most functional languages can perform this trick: John McCarthy’s Lisp in Lisp is a classic. Nowadays, there are Scheme in Scheme compilers.

Therefore it is no surprise to find an ML compiler in ML, specifically an SML compiler in SML. This has been the mission of MLKit since version 1 in 1993. The project was to turn the Definition of Standard ML (1990) into an actual implementation in SML. Since then, the 1990 Definition has been revised, and MLKit has evolved into version 4 with ever better compilation and implementation. This is a nice example of the expressive power of ML.

At first, the goal of MLKit was to understand the ML language itself. Both the compilation and the implementation are coded for clarity, rather than efficiency. However, this understanding provided the basis for other efficient ML implementations, including Moscow ML and Caml.

As a programming language, ML is quite powerful: it features both functional and procedural styles.

ML was born a functional language: indeed, the very first implementation of ML was in Lisp, the grand-daddy of functional programming. Like Lisp, ML has a top-level read-eval-print loop, making it interactive. However, ML is not an interpreted language: the top-level interactive loop is actually running an incremental compiler. Most ML implementations compile ML into some form of lambda expressions. Possible ways to deal with these lambda expressions are:

  1. Interpret the lambda expressions.
  2. Compile the lambda expressions into byte-codes of some abstract machine, and run the abstract machine.
  3. Convert the abstract machine byte-codes into native machine codes, to be run directly in real machines.
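Option 1 is the simplest to illustrate. Below is a minimal sketch of a direct lambda-expression interpreter, in Python rather than ML; the tuple encoding (`'lit'`, `'var'`, `'lam'`, `'app'`) is invented for this illustration and not taken from any real ML implementation.

```python
# Expressions are tagged tuples:
#   ('lit', n)          -- a literal value
#   ('var', name)       -- a variable reference
#   ('lam', name, body) -- a lambda abstraction
#   ('app', f, x)       -- function application

def interp(expr, env):
    """Evaluate a lambda expression in an environment (a dict)."""
    tag = expr[0]
    if tag == 'lit':
        return expr[1]
    if tag == 'var':
        return env[expr[1]]
    if tag == 'lam':
        name, body = expr[1], expr[2]
        # A closure: capture the current environment, extend it on call.
        return lambda arg: interp(body, {**env, name: arg})
    if tag == 'app':
        return interp(expr[1], env)(interp(expr[2], env))
    raise ValueError("unknown tag: " + tag)

# (fn x => x) 42
identity = ('lam', 'x', ('var', 'x'))
print(interp(('app', identity, ('lit', 42)), {}))  # 42
```

Options 2 and 3 replace the recursive walk with a translation of the same expressions into byte-codes, and then into native machine code.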

In pure functional programming, data are only transformed, never modified, hence there are no side effects. This simplifies reasoning about program correctness. However, real-world applications need to interact with the world (e.g. responding to mouse clicks), which depends on side effects. Procedural programming is largely based on changing data, the most explicit form being the use of pointers in C/C++ to modify memory locations directly.
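The contrast is easy to make concrete. A small sketch (in Python, with hypothetical function names) of the two styles:

```python
# Pure style: produce a new list; the input is never modified.
def doubled(xs):
    return [2 * x for x in xs]

# Procedural style: modify the list in place -- a side effect.
def double_in_place(xs):
    for i in range(len(xs)):
        xs[i] *= 2

a = [1, 2, 3]
b = doubled(a)        # a is still [1, 2, 3]; b is [2, 4, 6]
double_in_place(a)    # a is now [2, 4, 6]
```

Reasoning about `doubled` needs only its input and output; reasoning about `double_in_place` requires tracking the state of `a` over time.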

Intended for building practical applications (like theorem provers), ML provides procedural features such as assignments and references. As a result, ML shares many goodies of mainstream programming languages, e.g. the use of modules and separate compilation.

However, even with these many features, ML is not a bloated language. It has a well-thought-out design, built on typed lambda calculus. The evolution of ML is a story of how to implement functional languages efficiently, achieving performance comparable to procedural languages.

Learning ML

I have started to play with ML, the MetaLanguage for theorem provers. It was developed by Robin Milner and others in the early 1970s at the University of Edinburgh. It is now standardized as SML, Standard ML.

A good ML Tutorial is this:
Tips for Computer Scientists on Standard ML (Revised) by Mads Tofte (2009), which describes all of Standard ML in 20 pages.

To try out ML, I installed Moscow ML by Sergei Romanenko. The Windows installer is very convenient. It runs in command-line mode, which is interactive enough for trying out ML.

For a few lines, typing is OK. For many more lines, an editor is much preferred. I’m using TextPad, and I was able to locate an SML syntax file for TextPad.

Poly/ML by David Matthews is also good for learning ML. It is Windows-based, and the documentation includes a short tutorial on programming Windows in Poly/ML. Sounds exciting.

I can run ML scripts with both Moscow ML and Poly/ML. The Poly/ML window even has copy/paste, so I can copy ML snippets from tutorials and paste them into the window for execution.

The language ML is rich with features, so it may take some time to know it well.

The story starts here.
G: Let the first one introduce himself.

B: George Boole (1815-1864), from England. Did you read my book?
X: Which book?
G: An Investigation of the Laws of Thought of 1854, a classic. I read it just before I logged off.

G: You have another model for GF(2)?
B: Yes, and it isn’t based on numbers.

G: Spell out the details, please.
B: It is based on simple sentences, those for which you can tell immediately whether they are true or false.

G: This is straight from your book!
B: I also consider simple ways to combine such sentences, using only the words and and or, called connectives.

G: That’s two symbols, two operations. Interesting.
B: The connectives can join the simple sentences in 8 possible ways:

Sentence Computations

    Or    | false | true
    false | false | true
    true  | true  | true

    And   | false | true
    false | false | false
    true  | false | true

X: Exactly how do you compute with sentences?

G: Think logically. I’m here and you’re here: true or false? I’m here or I’m there: true or false?
X: I see. When connected by and, both parts must be true to have true finally. When connected by or, both parts must be false to be false finally.

G: That’s Boole’s Laws of Thought!
B: And it gives a model of GF(2) if we use this dictionary:

Boole’s proposed model of GF(2)

    Symbol    | Meaning
    0         | false
    1         | true

    Operation | Meaning
    +         | logical or
    ×         | logical and

X: That’s brilliant! A symbolic model for a symbolic system!

G: Not quite. This model is wrong!
G: I spot the error: in your model, true or true is true. Translated by the dictionary, 1 + 1 = 1, which is incorrect in GF(2).

B: Oops! How embarrassing! I made a boolean error!
G: That’s the lesson. We need to check carefully to ensure all dictionary translations are perfect.

B: Wait a second. I can fix my error!
G: Don’t make the same mistake twice.

B: No, I won’t. Instead of or, use exclusive or, shortened to xor. I just redo all the computations:

Sentence Computations

    Xor   | false | true
    false | false | true
    true  | true  | false

    And   | false | true
    false | false | false
    true  | false | true

X: What is xor, or exclusive or?
G: It means either this or that, but not both. Tomorrow either the sun will shine or rain will fall. If tomorrow is foggy or a rainbow appears, my prediction is wrong.

B: And the Boole model of GF(2) is:

Boole model of GF(2)

    Symbol    | Meaning
    0         | false
    1         | true

    Operation | Meaning
    +         | logical xor
    ×         | logical and

G: That’s an impressive quick fix!
G: Let me do further checks … they’re all OK. I declare the Boole model of GF(2) verified correct.
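The check G performs can be done mechanically. A small sketch in Python (the dictionary names `to_bool` and `to_bit` are invented for the illustration) confirming that xor and and match addition and multiplication in GF(2):

```python
# The Boole dictionary: 0 <-> false, 1 <-> true, + <-> xor, x <-> and.
to_bool = {0: False, 1: True}
to_bit  = {False: 0, True: 1}

for x in (0, 1):
    for y in (0, 1):
        # GF(2) arithmetic on the left, sentence computation on the right.
        assert (x + y) % 2 == to_bit[to_bool[x] != to_bool[y]]   # + is xor
        assert (x * y) % 2 == to_bit[to_bool[x] and to_bool[y]]  # x is and
print("Boole model of GF(2) checks out")
```

Note that replacing `!=` (xor) with `or` makes the first assertion fail at x = y = 1, which is exactly the 1 + 1 = 1 error caught above.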

X: Did you just invent a logical operation to fit into the model?
B: No. It’s all in my book. Using either-or to join sentences is rather common.

G: Now the second model. Your name please.
E: Euclid of Alexandria (300 BC) from Greece. Excuse me for my dates as I’ve lost count.

G,G,B: (bowing) We all read your book.
X: Which book?
G,G,B: (angry looks) Ssh!

E: My classic text Elements was the bible of geometry for more than 20 centuries, until the rise of modern math.
G: Your method of doing math is still being used in all math textbooks.

E: I try to catch up with modern math, so I’ve been doing a bit of abstract algebra lately.
G: I suppose you have a model of GF(2) based on geometry?

E: Take a non-symmetric letter of the alphabet, say the letter E in my name. There are two ways you can play with it: either leave it alone, or reflect it to its mirror image Ǝ.
B: See, Euclid is using my xor!

E: The first task, doing nothing, I call identity. The second task, flipping over, I call reverse.
B: I can see that.

E: Given two tasks, you can either do both of them, or just do the first one and ignore the second.
B: Another xor. First is diligent, while the second is lazy.

E: There are 8 ways to combine the tasks with the do-both or do-one operations:

Task Computations

    Do-both  | identity | reverse
    identity | identity | reverse
    reverse  | reverse  | identity

    Do-one   | identity | reverse
    identity | identity | identity
    reverse  | identity | reverse

E: This gives a model of GF(2) using the following dictionary:

Euclid’s proposed model of GF(2)

    Symbol    | Meaning
    0         | identity
    1         | reverse

    Operation | Meaning
    +         | Do-both
    ×         | Do-one

X: Another impressive feat!

G: Look carefully: this model is wrong!
G: If your Do-one means to do the first one, then the Do-one table has an error: assuming the left column denotes the first task, we should have Do-one(reverse, identity) = reverse, not identity as shown.

E: Oops, I may not be so good in algebra!
B: We do have to check thoroughly.

E: But I think I can fix this … Yes, I can. Instead of Do-one meaning do the first one, let’s change it to do the easy one, assuming the task reverse is harder than the task identity, which is doing nothing.
B: This gives the revised 8 computations:

Task Computations

    Do-them-both | identity | reverse
    identity     | identity | reverse
    reverse      | reverse  | identity

    Do-easy-one  | identity | reverse
    identity     | identity | identity
    reverse      | identity | reverse

E: And with this dictionary we surely have a model of GF(2):

Euclid model of GF(2)

    Symbol | Meaning
    0      | identity, the easy task
    1      | reverse, the hard task

    Operation | Meaning       | Name
    x + y     | do x, do y    | Do-them-both
    x × y     | do easy(x, y) | Do-easy-one

G: Let me do further checks … they’re OK. I declare the Euclid model of GF(2) verified correct.
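The Euclid model can also be checked mechanically. A sketch in Python, where tasks act on a two-letter string so that identity and reverse are distinguishable; all function names here are invented for the illustration:

```python
# The Euclid dictionary: 0 <-> identity, 1 <-> reverse,
# + <-> do-them-both, x <-> do-easy-one.
def identity(s):
    return s

def reverse(s):
    return s[::-1]

def do_both(x, y):
    return lambda s: y(x(s))  # perform task x, then task y

def do_easy_one(x, y):
    # identity (doing nothing) is the easy task: the result is reverse
    # only when both tasks are the hard one.
    return reverse if (x is reverse and y is reverse) else identity

to_task = {0: identity, 1: reverse}

def as_bit(task):
    # Recover the symbol by observing what the task does to "ab".
    return 0 if task("ab") == "ab" else 1

for x in (0, 1):
    for y in (0, 1):
        assert (x + y) % 2 == as_bit(do_both(to_task[x], to_task[y]))
        assert (x * y) % 2 == as_bit(do_easy_one(to_task[x], to_task[y]))
print("Euclid model of GF(2) checks out")
```

The key case is do-both(reverse, reverse): flipping twice restores the letter, matching 1 + 1 = 0 in GF(2).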

G: This Euclid model inspires me to still another model of GF(2).
X: Another model?!

G: Sure. This one is based on numbers, but not modulo 2.
G: Let’s hear the details.

G: Take the two numbers +1 and -1. The symbolic + means numeric multiplication, and the symbolic × means taking the maximum.
G: This is very interesting. Let me compute:

Numeric Computations

    Multiply | +1 | -1
    +1       | +1 | -1
    -1       | -1 | +1

    Maximum  | +1 | -1
    +1       | +1 | +1
    -1       | +1 | -1

B: And use this dictionary for the model of GF(2):

Gauss model of GF(2)

    Symbol | Meaning
    0      | +1
    1      | -1

    Operation | Meaning
    x + y     | multiply(x, y)
    x × y     | maximum(x, y)

G: Further checks are OK. This is indeed the Gauss model of GF(2).
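The same mechanical check applies to the Gauss model. A Python sketch (the names `to_num` and `to_bit` are invented for the illustration):

```python
# The Gauss dictionary: 0 <-> +1, 1 <-> -1, + <-> multiply, x <-> maximum.
to_num = {0: +1, 1: -1}
to_bit = {+1: 0, -1: 1}

for x in (0, 1):
    for y in (0, 1):
        assert (x + y) % 2 == to_bit[to_num[x] * to_num[y]]
        assert (x * y) % 2 == to_bit[max(to_num[x], to_num[y])]
print("Gauss model of GF(2) checks out")
```

The case x = y = 1 is the one that motivated the model: (-1)(-1) = +1 translates back to 1 + 1 = 0.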

X: How did you get this model from the Euclid model?
G: When I saw that reversing twice gives identity, I recalled (-1)(-1) = +1. Euclid used the easier task to pick the identity result in his multiplication, so I tried the lesser of -1 and +1. It didn’t work, but the bigger of -1 and +1 works.

G: Very good indeed. This inspires me to yet another model of GF(2)!
X: Still more?

G: Yes. Take the symbols 0 and 1 as numbers, but let + mean difference, and × mean minimum.
X: What is difference?

G: That’s subtraction without sign: always taking the smaller from the bigger.
X: Oh, I see. Let me do the computations:

Numeric Computations

    Difference | 0 | 1
    0          | 0 | 1
    1          | 1 | 0

    Minimum    | 0 | 1
    0          | 0 | 0
    1          | 0 | 1

G: To match GF(2), the dictionaries for symbols and operations are:

Galois model of GF(2)

    Symbol | Meaning
    0      | number 0
    1      | number 1

    Operation | Meaning
    x + y     | difference(x, y)
    x × y     | minimum(x, y)

G: That’s excellent.
G: Further checks are OK in this case. This is my Galois model of GF(2).
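This model too can be checked mechanically. A Python sketch (the helper name `difference` is invented for the illustration):

```python
# The Galois dictionary: symbols 0 and 1 are the numbers 0 and 1;
# + is difference (subtraction without sign), x is minimum.
def difference(x, y):
    """Subtraction without sign: the smaller taken from the bigger."""
    return abs(x - y)

for x in (0, 1):
    for y in (0, 1):
        assert (x + y) % 2 == difference(x, y)
        assert (x * y) % 2 == min(x, y)
print("Galois model of GF(2) checks out")
```

This is arguably the simplest model of all: the symbols are already the right numbers, and only the operations are reinterpreted.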

B: It is thrilling to see so many models of GF(2).
X: Is there a most suitable one among the many choices?

G: That’s a good question. It’s too late. We’ll call it a day.
X: Yes, next time.

Y: This is way too late.

X: I have learned too many models of GF(2) in one day. I’m exhausted.