Annex C - Literature Review

The number of possible permutations of the squares on a Rubik's cube seems daunting. There are 8 corner pieces that can be arranged in 8! ways, and each corner piece has 3 possible orientations, giving 3^8 orientation patterns for each permutation of the corner pieces. There are 12 edge pieces which can be arranged in 12! ways, and each edge piece has 2 possible orientations, so each permutation of the edge pieces has 2^12 arrangements. But in the Rubik's cube, only 1 in 3 of these arrangements has the rotations of the corner cubies correct, only 1 in 2 has the same edge-flipping orientation as the original cube, and only 1 in 2 of those has the correct cubie-rearrangement parity, which will be discussed later. This gives 4.3252 · 10^19 possible arrangements of the Rubik's cube.
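As a check, the figure follows from dividing the raw count by the factor of 3 · 2 · 2 for the three constraints above; a worked version of the arithmetic:

```latex
\frac{8! \cdot 3^{8} \cdot 12! \cdot 2^{12}}{3 \cdot 2 \cdot 2}
  = \frac{40320 \cdot 6561 \cdot 479001600 \cdot 4096}{12}
  = 43{,}252{,}003{,}274{,}489{,}856{,}000
  \approx 4.3252 \cdot 10^{19}
```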

It is not completely known how to find the minimum distance between two arrangements of the cube. Of particular interest is the minimum number of moves from any permutation of the cube's cubies back to the initial solved state. Another important question is the worst possible jumbling of the cube, that is, the arrangement whose optimal solution requires the greatest number of moves back to the solved state. This number is referred to as "God's number," and has been shown (only as recently as August 12 this year) to be as low as 22.¹ A lower bound on God's number can be found by a counting argument. Since the first twist of a face can happen in 12 ways (there are 6 faces, each of which can be rotated in 2 possible directions), and the move after that can twist another face in 11 ways (since one of the 12 undoes the first move), we can bound the worst possible number of moves away from the start state with the following "pigeonhole" inequality (the number of possible move sequences must be greater than or equal to the number of permutations of the cube): 12 · 11^(n−1) ≥ 4.3252 · 10^19, which is solved by n ≥ 19.
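Solving the inequality for n makes the bound explicit:

```latex
12 \cdot 11^{\,n-1} \ge 4.3252 \cdot 10^{19}
\quad\Longrightarrow\quad
n - 1 \ge \log_{11}\!\left(\frac{4.3252 \cdot 10^{19}}{12}\right) \approx 17.8
\quad\Longrightarrow\quad
n \ge 19
```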


There is a curious place where logic, mathematics and algorithms meet in a very specific way. It's called the Curry-Howard isomorphism.
First, programming:
Suppose we have a statically typed programming language like Java or C#. We might have an object of type `X` defined by a line of code, and we might also have a function `f` taking arguments of type `X` and returning things of type `Y`.
As shorthand we can say `f` is of type `X -> Y`. The stuff before the arrow is the argument type of `f` and the stuff after is the return type.
Given an object of type `X` and a function of type `X -> Y`, we can make an object of type `Y` by applying `f` to the object.
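A minimal sketch of this in Java, using placeholder classes `X` and `Y` (hypothetical names standing in for the types discussed above):

```java
import java.util.function.Function;

public class CurryHowardDemo {
    // Placeholder types standing in for `X` and `Y` in the text above.
    static class X {}
    static class Y {}

    public static void main(String[] args) {
        X x = new X();                       // an object of type X
        Function<X, Y> f = input -> new Y(); // a function of type X -> Y
        Y y = f.apply(x);                    // applying f to x yields an object of type Y
        System.out.println(y.getClass().getSimpleName()); // prints "Y"
    }
}
```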

Second, logic:
Suppose `X` is a proposition we have proved. Suppose also we have managed to prove that `X` implies `Y`. We can write the implication as `X -> Y`. And now we have almost the same statement as before: given a proof of `X` and a proof of `X -> Y` we get a proof of `Y`.
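This is simply the modus ponens inference rule, shown here in standard natural-deduction notation (an illustrative restatement, not part of the original text):

```latex
\frac{X \qquad X \to Y}{Y}
```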


Third, mathematics:

Suppose `x` is an element of some set `X`. Suppose we have a function `f : X -> Y`. Then we can construct an element of the set `Y` simply by applying `f` to `x`, giving `f(x)`.
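In set-theoretic notation (an illustrative restatement of the sentence above, not from the source):

```latex
x \in X,\qquad f : X \to Y \quad\Longrightarrow\quad f(x) \in Y
```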
Adapted from http://www.quora.com/What-are-the-relationships-between-logic-math-and-algorithms