Saturday, February 12, 2011

The Problem With The Halting Problem

Of course everybody knows the halting problem is undecidable; what is less known is why the halting problem is undecidable. Surprisingly, I think the problem comes from the fact that the proof is actually too simple. It's really only a few sentences and doesn't even involve any big words. You don't have to follow along with a pencil and paper, recomputing each step of the proof to see how it was done. You just start reading, and before you know it, it's over. You read and understand every single word but feel like you've been cheated. Something about it just doesn't feel right. It's one of the most profound proofs in the theory of computation, but you don't feel any different after reading it. The problem is that the simplicity is deceptive. If you don't take the time to limber up your mind for all the bending it has to do, you'll skip right over it. So before we get into the proof, let's start with some mind-stretching.

The proof goes something like this:
  1. Assume that the halting problem is decidable.
  2. ???
  3. Deduce that the universe doesn't exist.
Limber yet? Good.

Now we can address what you've been wondering this whole time: what the heck the halting problem is. The halting problem is just the problem of deciding whether a given program will eventually terminate or continue to run indefinitely on a given input. If it were decidable, that would mean that for any given program on any given input, it would be theoretically possible to determine whether or not that program would halt on that input. Emphasis is of course on the fact that it would have to be decidable on any program and any input; solving trivial examples is obviously possible. Unfortunately the most obvious solution doesn't quite cut it. If you just run the program on the input and see if it halts, you would never know how long to let it run before deciding that it will never halt. The best you could do is wait a while and hope it halts, but if it didn't, you still wouldn't know if it would eventually halt or not. Even with a time machine you could fast forward to the end of the universe and still never know if it really was going to loop forever or if it was just about to halt. If a solution exists, it's going to involve a bit more cunning.
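To see the difficulty concretely, here's a minimal sketch in Python (the generator-based encoding of a "program" and all the names are mine, just for illustration): running a program under a step budget can confirm that it halts, but exhausting the budget proves nothing.

```python
def halts_within(program, arg, max_steps):
    """Run `program` (a generator function that yields once per step)
    on `arg` for at most `max_steps` steps.  True means it halted in
    time; False only means it hadn't halted *yet*."""
    execution = program(arg)
    for _ in range(max_steps):
        try:
            next(execution)
        except StopIteration:
            return True        # the program halted within the budget
    return False               # budget exhausted; no conclusion possible

def count_down(n):
    while n > 0:               # clearly halts for any n
        n -= 1
        yield

def loop_forever(_):
    while True:                # clearly never halts
        yield
```

Here `halts_within(count_down, 5, 100)` comes back True, and `halts_within(loop_forever, 0, 100)` comes back False; but a program that halts on step 101 would get the same False, which is exactly the problem.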

It may seem like it's really not even worth it to muster up the necessary cunning to solve the halting problem, but a solution would actually be incredibly valuable. Hard math problems like the Collatz conjecture would suddenly be within reach: write a program that searches through the numbers for a counterexample and halts the moment it finds one, then hand that program to the halting problem solver and ask whether it ever finishes. (For Collatz there's a wrinkle, since a sequence that shoots off to infinity is also a counterexample but never announces itself; you'd need the solver to vet each individual trajectory too. The idea is the same.) There are a myriad of other undecidable problems that a halting-problem solver would make short work of. Not to mention, how cool would it be to have a compiler that could warn you that the program you just compiled will never finish running? I know it would have saved me some debugging time.
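For concreteness, here's roughly what that search program might look like for Collatz, in Python (a sketch; the function names are mine). One caveat, as noted above: a Collatz sequence that diverged to infinity would hang the inner check itself, which is exactly where you'd need the hypothetical halting-problem solver a second time. But it shows the shape of the trick.

```python
def collatz_step(n):
    """One step of the Collatz rule."""
    return 3 * n + 1 if n % 2 else n // 2

def collatz_cycles_without_one(n):
    """Follow n's Collatz sequence.  True if it enters a cycle that
    never reaches 1 (a counterexample); False if it reaches 1.  A
    sequence that diverged to infinity would loop here forever."""
    seen = set()
    while n != 1:
        if n in seen:
            return True        # revisited a value without hitting 1
        seen.add(n)
        n = collatz_step(n)
    return False

def search_for_counterexample():
    """Halts if and only if some number's sequence cycles without
    reaching 1.  Hand this function to a halting-problem solver and,
    modulo the caveat above, the conjecture is settled."""
    n = 2
    while True:
        if collatz_cycles_without_one(n):
            return n           # counterexample found: halt
        n += 1
```

Calling `search_for_counterexample()` directly would (as far as anyone knows) run forever, which is the whole point: only a halting-problem solver could tell us without waiting.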

Well, now that you're all excited about how awesome a solution to the halting problem would be, let's get to the reason it can never exist. As we went over in the outline, step one of the proof is pretty simple: we just assume we have some program that can solve the halting problem. Let's call it Halting_Problem_Solver(p, i). Halting_Problem_Solver takes two parameters: the first is a program (p), and the second is some input (i). Halting_Problem_Solver returns true if the program p will ever halt on input i, and false if it will loop infinitely. Simple enough, right? Just make sure your mind is still limber for step two.

Now that we have a Halting_Problem_Solver, we're going to design a program which uses it to crash the universe. This is where it gets tricky. We'll call this second program Halting_Problem_Crasher(p). It takes one input, a program p. Halting_Problem_Crasher(p) is going to call Halting_Problem_Solver(p, p), which is to say it will see if program p will ever halt when given itself as input. Whatever Halting_Problem_Solver determines that p does with itself as its input, Halting_Problem_Crasher does the exact opposite. In other words, if it is determined that program p does eventually halt when given itself as input, Halting_Problem_Crasher enters an infinite loop. If it is determined that program p loops infinitely when given itself as input, Halting_Problem_Crasher halts.

Here it is in some python-esque pseudo-code:

def Halting_Problem_Crasher(p):
  if Halting_Problem_Solver(p, p) == Halts:
    while True: pass  # infinite loop
  else:
    exit()  # halt

Still with me? Ok, now what we're going to do is call Halting_Problem_Crasher(Halting_Problem_Crasher). That is, we're running Halting_Problem_Crasher with itself as input. Let's run through what happens. Halting_Problem_Crasher takes the program it's given and gives it to the Halting_Problem_Solver, using it as both of the parameters. So in this case, it calls
Halting_Problem_Solver(Halting_Problem_Crasher, Halting_Problem_Crasher)
So Halting_Problem_Solver is going to try to figure out if Halting_Problem_Crasher will loop infinitely or eventually halt with itself as input. But wait, that's what we were trying to figure out in the first place when we called
Halting_Problem_Crasher(Halting_Problem_Crasher)
So what should Halting_Problem_Solver determine? If it says it halts, Halting_Problem_Crasher loops. If it says it loops, Halting_Problem_Crasher halts. No matter what it says, it's wrong. The only way it could work is by looping and halting simultaneously. So the very existence of a solution to the halting problem would prove that looping and halting are equivalent. And if looping and halting are equivalent, so are true and false, one and zero, up and down, etc. In other words, the existence of a solution to the halting problem would disprove our very existence. Since the universe seems to be holding up fine, the only way out of the contradiction is to drop the original assumption: Halting_Problem_Solver cannot exist, and the halting problem is undecidable.

Wednesday, February 9, 2011

The Pumping Lemma, and Why It's Slightly More Important Than You Might Have Thought

A while ago I answered a question on The Stack about the pumping lemma. My answer was, of course, a raving success, earning an unprecedented twenty-three up-votes (i.e. not very many). But what's important here isn't how awesome I am, but rather how awesome other people aren't. On a site populated with professional software engineers, most of whom probably have degrees in computer science, I was surprised to see exactly how poorly most people understand what the pumping lemma is and what it has to do with anything. So, for the benefit of my vast readership (hi mom!), here is my explanation of the pumping lemma in layman's terms.

The pumping lemma is a simple tool for proving that a language is not regular, meaning that no Finite State Machine can be built for it. The canonical example is the language (a^n)(b^n). This is the simple language which is just any number of a's, followed by the same number of b's. So the strings

ab
aabb
aaabbb
aaaabbbb
etc. are in the language, but
aab
bab
aaabbbbbb
etc. are not.
It's simple enough to build a FSM for these examples:
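Here's one way to sketch such a machine in Python (the (phase, count) encoding is my own): the states are pairs like ('a', 2), and there are only 2 * max_n + 2 of them in total, so this really is a finite-state machine.

```python
def make_acceptor(max_n):
    """A finite-state acceptor for a^n b^n with 1 <= n <= max_n.
    States are pairs: ('a', k) while reading a's, ('b', k) while
    reading b's.  The state set is finite: 2 * max_n + 2 states."""
    def accepts(s):
        phase, count = "a", 0
        for ch in s:
            if phase == "a" and ch == "a" and count < max_n:
                count += 1                      # one more a counted
            elif phase == "a" and ch == "b" and count > 0:
                phase, count = "b", count - 1   # switch to reading b's
            elif phase == "b" and ch == "b" and count > 0:
                count -= 1                      # one b matched off
            else:
                return False                    # dead state: reject
        return phase == "b" and count == 0
    return accepts
```

So make_acceptor(4) accepts ab through aaaabbbb and rejects everything else, including aaaaabbbbb: the fifth a has nowhere to go.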
This one will work all the way up to n=4. The problem is that our language didn't put any constraint on n, and Finite State Machines have to be, well, finite. No matter how many states I add to this machine, someone can give me an input where n equals the number of states plus one, and my machine will fail. So if a machine can be built to read this language, there must be a loop somewhere in there to keep the number of states finite. With these loops added:
all of the strings in our language will be accepted, but there is a problem. After the first four a's, the machine loses count of how many a's have been input, because it stays in the same state. That means that after four, I can add as many a's as I want to the string, without adding any b's, and still get the same return value. This means that the strings:
aaaa(a*)bbbb
with (a*) representing any number of a's, will all be accepted by the machine even though they obviously aren't all in the language. In this context, we would say that the (a*) part of the string can be pumped. The fact that the Finite State Machine is finite while n is not bounded guarantees that any machine which accepts all strings in the language MUST have this property. The machine must loop at some point, and at the point where it loops, the string can be pumped. Therefore no Finite State Machine can be built for this language, and the language is not regular.
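You can watch the pumping happen in code. Here's a sketch in Python (my own encoding, standing in for the diagram) of a machine with such a loop on the a side: once four a's have been seen, further a's leave the state unchanged, and that single loop is what lets strings outside the language through.

```python
def looped_acceptor(s, max_n=4):
    """A finite acceptor for a^n b^n, except that once max_n a's have
    been seen, further a's loop in the same state.  That one loop is
    what makes the string pumpable."""
    phase, count = "a", 0
    for ch in s:
        if phase == "a" and ch == "a":
            count = min(count + 1, max_n)       # the loop: count saturates
        elif phase == "a" and ch == "b" and count > 0:
            phase, count = "b", count - 1
        elif phase == "b" and ch == "b" and count > 0:
            count -= 1
        else:
            return False
    return phase == "b" and count == 0
```

It still accepts aaaabbbb, but now every string of the form aaaa(a*)bbbb slips through as well, even though only the first of those is actually in the language.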

At this point you're probably asking yourself, "what does this have to do with anything and why should I care?" A fair question. The most immediate application of the pumping lemma is probably the problem of parsing HTML, or more generally, XML. As you may or may not remember, regular expressions (the textbook kind, without modern extensions like backreferences) and finite state machines are equivalent. That, along with the fact that HTML tags can be nested within themselves deeper than Inception's dream levels, means that regular expressions can't parse HTML. In other words, regular expressions can't parse HTML. Which brings me to my next point: regular expressions can't parse HTML. Give me a regular expression for parsing HTML and I'll give you valid HTML that it is incapable of parsing. Seriously, I'm not busy.
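A tiny demonstration using Python's re module (the snippet of HTML is made up): a naive pattern for a div element pairs the first opening tag it sees with the first closing tag, so any nesting gets mangled.

```python
import re

html = "<div>outer <div>inner</div> tail</div>"

# Non-greedy: matches from the first <div> to the *first* </div>,
# cutting the nested element in half.
inner = re.search(r"<div>(.*?)</div>", html).group(1)
print(inner)   # outer <div>inner

# Greedy (.*) doesn't save you either: it runs to the *last* </div>,
# which breaks as soon as two sibling divs appear.
```

No fixed pattern can balance open and close tags at arbitrary depth; that's the pumping lemma talking.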

Of course that probably doesn't seem like all that big of a deal, given all the freely available HTML parsers out there, but it turns out that the programmers who don't understand the pumping lemma are the same programmers who unnecessarily reinvent well-established tools when faced with simple problems.