I receive a 3x3 Wooden Puzzle Cube as swag.
It takes, on average, something like 2 to 3 random attempts to get all the pieces into the box. There is no obvious pattern to any of the solutions.
Sometimes I solve it on the first go and feel all clever about myself.
But to solve it like a nerd, I must do so with a computer program (written in Java) that computes all possible combinations, and does so quickly.
The basic idea is to have a Cube object (3x3x3 Integer array) and keep trying to fill it with Pieces in all their possible orientations. Any time one fits, I add that partially filled Cube (a copy to be exact) to a queue. I keep taking Cubes off the queue, and keep adding Pieces until I either have a correct solution (which I store), or a reject (which I throw away).
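A minimal sketch of that core idea might look like the following. The class and method names here are my own for illustration, not necessarily those used in the repository:

```java
public class CubeSearch {

    static final int N = 3;      // cube is N x N x N
    static final int EMPTY = 0;  // 0 marks an unfilled cell

    // Deep-copy a cube so each queued search node is independent.
    static int[][][] copyOf(int[][][] cube) {
        int[][][] copy = new int[N][N][N];
        for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++)
                copy[x][y] = cube[x][y].clone();
        return copy;
    }

    // Try to place a piece (given as absolute cell coordinates) into the cube.
    // Returns a new partially filled copy, or null if the piece doesn't fit.
    static int[][][] place(int[][][] cube, int[][] pieceCells, int pieceId) {
        int[][][] next = copyOf(cube);
        for (int[] c : pieceCells) {
            int x = c[0], y = c[1], z = c[2];
            if (x < 0 || x >= N || y < 0 || y >= N || z < 0 || z >= N
                    || next[x][y][z] != EMPTY) {
                return null; // out of bounds, or cell already occupied
            }
            next[x][y][z] = pieceId;
        }
        return next;
    }
}
```

Each successful `place` yields a fresh search node to push onto the queue; a node with no empty cells left is a solution.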
This strategy is simple to implement, but it is slow to traverse the problem space and produces a lot of duplicate solutions. It’s better to “cheat” in a number of important ways in order to finish quickly:
- Space priority
Force the Cube to have its internal space filled in a specific order. This optimisation cuts down duplicates by rejecting, from every sub-tree, a large number of valid solutions that we know can always be reached in other ways.
- Non-rotating Piece
A specific Piece (namely the “S”) is put through fewer transformations. This means that in every solution it is essentially “locked” in place (i.e. it only moves to the centre or the edge, and never rotates), resulting in fewer duplicate sub-tree searches.
- Depth first search
Storing the entire search graph in memory is prohibitively expensive, which is why a depth first search strategy is preferable. Cube nodes with fewer Pieces remaining take priority over their emptier peers, so the deeper parts of the tree are explored and discarded quickly. Despite the huge size of the problem space, the queue averages around 500 in-memory Cube search nodes.
- Piece Transformation Caching
Generating transformations for each Piece is expensive (a tip I picked up from the Java profiler “JVM Monitor”). It’s much more efficient to cache all the transformations for each shape in memory at the start, and simply reject the options that don’t fit.
- Duplicate Filtering
As we have a “Non-rotating Piece” and each solution is a Cube of “Piece.TYPE” values, it’s easy to deep compare the solutions and remove the remaining duplicates.
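For the caching step, one way to precompute every distinct orientation of a piece is to apply all combinations of 90° rotations about the three axes (which cover the 24 rotations of a cube) and deduplicate via a canonical, translated-and-sorted form. The names below are illustrative, not taken from my repository:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PieceCache {

    // Rotate every cell 90 degrees about the given axis (0 = X, 1 = Y, 2 = Z).
    static int[][] rotate(int[][] cells, int axis) {
        int[][] out = new int[cells.length][];
        for (int i = 0; i < cells.length; i++) {
            int x = cells[i][0], y = cells[i][1], z = cells[i][2];
            if (axis == 0)      out[i] = new int[]{x, -z, y};
            else if (axis == 1) out[i] = new int[]{z, y, -x};
            else                out[i] = new int[]{-y, x, z};
        }
        return out;
    }

    // Translate so all coordinates start at 0, then sort the cells,
    // giving a canonical form that makes orientations comparable.
    static int[][] normalize(int[][] cells) {
        int mx = Integer.MAX_VALUE, my = Integer.MAX_VALUE, mz = Integer.MAX_VALUE;
        for (int[] c : cells) {
            mx = Math.min(mx, c[0]); my = Math.min(my, c[1]); mz = Math.min(mz, c[2]);
        }
        int[][] out = new int[cells.length][];
        for (int i = 0; i < cells.length; i++)
            out[i] = new int[]{cells[i][0] - mx, cells[i][1] - my, cells[i][2] - mz};
        Arrays.sort(out, Arrays::compare);
        return out;
    }

    // All distinct orientations of a piece, computed once up front.
    // The 4 x 4 x 4 rotation products cover the 24 cube rotations;
    // the map keyed on the canonical form discards the duplicates.
    static List<int[][]> orientations(int[][] piece) {
        Map<String, int[][]> unique = new LinkedHashMap<>();
        int[][] p = piece;
        for (int a = 0; a < 4; a++) {
            p = rotate(p, 0);
            int[][] q = p;
            for (int b = 0; b < 4; b++) {
                q = rotate(q, 1);
                int[][] r = q;
                for (int c = 0; c < 4; c++) {
                    r = rotate(r, 2);
                    int[][] n = normalize(r);
                    unique.putIfAbsent(Arrays.deepToString(n), n);
                }
            }
        }
        return new ArrayList<>(unique.values());
    }
}
```

The same `Arrays.deepToString` (or `Arrays.deepEquals`) trick serves the final duplicate-filtering pass over solved Cubes.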
My result: 222 possible solutions (ignoring by convention both duplicates and rotations), calculated in 18 seconds.
Code is written in Java, and available in one of my GitHub repositories.
This puzzle is not a true Soma Cube (which has 240 solutions), as the pieces provided are of different shapes.
I’m not going to show my working (bad student!), but the computational complexity (worst case) of my algorithm is:
(a) Without caching (i.e. generating each possible 3D piece on the fly):
(b) With caching:
n - width/length/height of Cube (hence lots of my for-loops have cubic complexity).
p - number of pieces.
t - number of types of piece (where t < p).
Neither approach scales very well if you want to output ALL possible solutions, though the depth first search approach does work well when all you want to find is ONE solution to the problem (with all solutions having equal merit).
Optimisations such as Space priority (saves memory) and Non-rotating Piece (duplicate prevention) do not affect the complexity calculation, but they shave several orders of magnitude off the real world performance. Having to filter duplicates from the result is a sign that the search algorithm itself can be improved.
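To illustrate the Space priority idea (with made-up names again): each partial Cube exposes exactly one “next” empty cell, and only placements covering that cell are attempted, so the same solution can no longer be reached by filling cells in a different order.

```java
public class SpacePriority {

    // Scan the cube in one fixed order and return the first empty cell.
    // Restricting piece placements to those that cover this cell prunes
    // the duplicate orderings of the same solution.
    static int[] nextEmptyCell(int[][][] cube) {
        for (int x = 0; x < 3; x++)
            for (int y = 0; y < 3; y++)
                for (int z = 0; z < 3; z++)
                    if (cube[x][y][z] == 0) return new int[]{x, y, z};
        return null; // no empty cells left: the cube is a complete solution
    }
}
```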