You and a friend both want to play white in a game of Chess. You have a coin, so to settle this dispute you agree to a coinflip. Heads you win,
Tails you don't. But there's a problem - you don't know if this coin is *fair*. That is, you don't know if the probability of Heads is 1/2.
This isn't a contrived situation. In fact, most coins have different engravings on each side (hence "Heads" and "Tails"), meaning the two sides differ
in weight, so the probability distribution won't be uniform.

So the problem becomes: can we **simulate** a fair coin toss using tosses of a possibly unfair coin? Take a moment (or longer) to try to solve this.

Let's call the probability that the coin comes up Heads on a toss *p*. Then Tails will have probability 1 − *p*. Consider the following algorithm:

- Toss the coin twice.
- If we get heads first, then tails (HT) then we say heads won
- If we get tails first, then heads (TH) then we say tails won
- If we get two heads or two tails (HH or TT), then start over.
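The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's official implementation; the function names and the choice of `random.random()` to model the biased coin are my own.

```python
import random

def biased_flip(p):
    """One toss of a coin that lands Heads with probability p."""
    return "H" if random.random() < p else "T"

def fair_flip(p):
    """Von Neumann's trick: extract a fair toss from a coin of unknown bias p.

    Toss twice; HT means Heads, TH means Tails, and HH or TT
    means discard the pair and start over.
    """
    while True:
        first, second = biased_flip(p), biased_flip(p)
        if first != second:
            return first  # HT -> Heads, TH -> Tails
        # HH or TT: start over
```

Running `fair_flip` many times with any bias strictly between 0 and 1 should give Heads roughly half the time, regardless of *p*.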

The key here is that the outcome HT has probability *p*(1 − *p*) of occurring, which is equal to the probability
that TH is the outcome of the two tosses (also *p*(1 − *p*)). So we're able to simulate a fair toss using an unfair coin. Cool, isn't it?
Of course, it isn't *perfect* - we start the algorithm over whenever we get HH or TT. This is a type of randomized algorithm known as a Las Vegas algorithm. Its worst-case runtime is unbounded; who knows how long
it'll take before we get an HT or TH. For this reason these algorithms are analyzed for their *expected* runtime, not their worst case. But I digress.
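We can make the expected-runtime claim concrete. Each round of two tosses decides the outcome with probability 2*p*(1 − *p*), so the number of rounds is geometric with that success probability, giving 1 / (2*p*(1 − *p*)) expected rounds, or 1 / (*p*(1 − *p*)) expected tosses. A quick self-contained sketch (the helper name is my own) lets us check this empirically:

```python
import random

def tosses_until_decision(p):
    """Count tosses of a p-biased coin until a round yields HT or TH."""
    tosses = 0
    while True:
        a = random.random() < p
        b = random.random() < p
        tosses += 2
        if a != b:
            return tosses

# Each two-toss round succeeds with probability 2p(1-p), so the
# expected number of tosses is 1 / (p(1-p)).
p = 0.9
expected = 1 / (p * (1 - p))  # about 11.1 tosses for p = 0.9
```

Note how the cost blows up as *p* approaches 0 or 1: the more biased the coin, the longer the trick takes on average.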

Let me get to the point. I'm a math student, and throughout my time as an undergraduate, I've often found myself in a seemingly paradoxical situation. The professor will introduce a famous, fundamental theorem, and then assign its proof as homework. This seemed strange to me; if the theorem was so important, how could it be easy enough to prove that a typical math student could solve it on a weekly homework assignment? The answer is that mathematics is often more about asking the right questions than about coming up with the right answers. Perhaps the typical image of an important mathematician is someone who solves long-standing open problems. But in reality, the most important mathematicians are the ones who find meaningful and interesting questions to ask, and coming up with these questions is often the hardest part.

The algorithm above answered the question, "How do we simulate a fair coin with a possibly unfair coin?" But who asked (and answered) this question? None other than John von Neumann, in 1951. I presented this problem because it's my favourite example of how coming up with the question is the hard part. The solution isn't difficult; every semester, thousands of computer science students are assigned this problem for homework. So why did it take until 1951, and a genius's genius like von Neumann, to come up with this simple but brilliant trick? Because in math, it often takes a genius to ask what a layman could answer.