dvgrn wrote: Sounds exciting! Am I guessing right, that you're using the simplifying assumption that the target object will always go back to a block after each construction phase?
The compiler will have no concept of 'target object'.
It will look at your 100-object input constellation and see if there's a pair of nearby objects* on the south-east envelope which can be simultaneously constructed from a single object. If so, the problem is automatically reduced to constructing a 99-object constellation.
* Since all my recipes are currently of the form (block|beehive|tub) --> Y + block, one of these objects must be a block. But the putative compiler isn't relying on this principle, and adding some cheap X --> Y + Z recipes to the 'splitters' directory will simply give it the freedom to use those as it sees fit.
Of course, direct object splitting isn't always possible (e.g. if there are no blocks in the constellation, and no X --> Y + Z recipes where neither Y nor Z is a block). If not, the compiler will look at individual objects which are not blocks, and see whether any of them can be constructed from a single block. (If so, the complexity has again been reduced.)
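The two reductions above can be sketched as a loop over toy recipe tables. Everything here is illustrative: the recipe names and the greedy search stand in for the real recipe database, and geometry (the 'nearby' requirement) is ignored entirely.

```python
# Toy recipe tables, read in reverse.  A splitter X --> Y + Z is used
# backwards to merge the pair {Y, Z} into X; a builder block --> Y is
# used backwards to replace Y by a block.  All names are illustrative.
SPLITTERS = {("beehive", "block"): "block",   # block --> beehive + block
             ("tub", "block"): "block",       # block --> tub + block
             ("block", "block"): "block"}     # block --> block + block
BUILDERS = {"beehive": "block", "tub": "block", "pond": "block"}

def reduce_to_block(constellation):
    """Greedily reduce a list of object names down to ['block']."""
    objs = sorted(constellation)
    steps = []
    while objs != ["block"]:
        for pair, parent in SPLITTERS.items():
            # a reverse split applies if both children are present
            if all(objs.count(o) >= pair.count(o) for o in pair):
                for o in pair:
                    objs.remove(o)
                objs.append(parent)
                steps.append(("unsplit", pair, parent))
                break
        else:
            # no pair applies: replace one non-block object by its source
            target = next(o for o in objs if o != "block")
            objs.remove(target)
            objs.append(BUILDERS[target])
            steps.append(("unbuild", target))
        objs.sort()
    return steps
```

Reversing the returned step list then gives the forward construction from a single block.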
Another simplification it will attempt is replacing a recognised complex object (eater2, snark, etc.) with a user-defined slow-salvo seed provided in a 'seeds' directory. In this case, the total number of objects increases (an eater2 seed might contain 5 simple objects, for instance), but the complexity decreases as measured by the ordered pair (complex objects, simple objects) -- where pairs are ordered lexicographically.
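Since Python tuples already compare lexicographically, the (complex objects, simple objects) measure can be written down directly; the counts below are made up for illustration:

```python
# Lexicographic comparison: fewer complex objects always wins, with the
# simple-object count only breaking ties.
before = (1, 3)   # one eater2 plus three simple objects
after = (0, 8)    # eater2 replaced by, say, a five-object seed
assert after < before   # more objects overall, but strictly simpler
```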
Provided it can reduce the complexity of any constellation, it can construct it from a single block by induction.
Now, this reverse approach is far more general than the 'sequentially construct objects by moving a hand block around' approach. In particular, draw a tree where leaves are objects you want in the final constellation, and nonleaves are intermediate targets, and the root of your tree is the initial target. Then a sequential construction compiler is forced to produce construction trees that look like this:
((((((( A, B), C), D), E), F), G), H)
which are maximally imbalanced. Contrariwise, a reverse compiler can equally well produce a synthesis with construction tree:
(((A, B), (C, D)), ((E, F), (G, H)))
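The difference between the two shapes is easy to quantify if construction trees are represented as nested pairs, as in the two examples above:

```python
def depth(tree):
    """Depth of a nested-pair construction tree (leaves are strings)."""
    if isinstance(tree, tuple):
        return 1 + max(depth(child) for child in tree)
    return 0

# the sequential tree has depth 7 (each object waits on all previous
# ones); the balanced tree has depth 3, so branches can proceed in
# parallel through independent intermediate targets
sequential = ((((((("A", "B"), "C"), "D"), "E"), "F"), "G"), "H")
balanced = ((("A", "B"), ("C", "D")), (("E", "F"), ("G", "H")))
```

In general a balanced tree on n leaves has depth about log2(n), against n - 1 for the maximally imbalanced one.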
Also, it is not difficult for the reverse compiler to dump entire solutions (say, for constructing a semi-Snark from a single block) back into the 'seeds' directory. So once it has constructed a semi-Snark, it will remember how to construct one in a single step on subsequent runs of the compiler. This will reduce compile time enormously for complex patterns made of simpler components.
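The seed cache amounts to simple memoisation. In this minimal sketch, an in-memory dict stands in for files in the 'seeds' directory, and expensive_search is a placeholder for a full compiler run:

```python
SEED_CACHE = {}          # stand-in for the 'seeds' directory on disk
SEARCHES = {"count": 0}  # counter, just to show the search runs once

def expensive_search(obj):
    """Placeholder for a full reverse-compilation of one object."""
    SEARCHES["count"] += 1
    return ["slow-salvo recipe for " + obj]   # illustrative result

def synthesise(obj):
    """Answer from the cache if possible; search and record otherwise."""
    if obj not in SEED_CACHE:
        SEED_CACHE[obj] = expensive_search(obj)
    return SEED_CACHE[obj]
```

Calling synthesise("semi-Snark") twice triggers only one search; the second call is the one-step replacement described above.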
For large sprawling patterns, the complexity measure might need to include (say) the total length of the minimum spanning tree. Otherwise, the compiler wouldn't know how to construct two blocks separated by 1000 units. So, to summarise, the compiler will try each of the following, in decreasing order of priority:
- Replace any recognised complex objects with user-defined seeds;
- Construct a pair of nearby objects from a single object;
- Construct a single object from a simpler object;
- Construct a single object from an object of equal complexity, in such a way as to reduce MST length.
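One concrete choice for the MST term in the last rule (the text doesn't pin one down) is the total Manhattan length of a minimum spanning tree over the object positions, computed here with Prim's algorithm:

```python
def mst_length(points):
    """Total Manhattan length of a minimum spanning tree on the points."""
    if not points:
        return 0
    in_tree = {points[0]}
    total = 0
    while len(in_tree) < len(points):
        # cheapest edge from the partial tree to an outside point
        d, p = min((abs(a[0] - b[0]) + abs(a[1] - b[1]), b)
                   for a in in_tree for b in points if b not in in_tree)
        total += d
        in_tree.add(p)
    return total
```

For instance, two blocks 1000 units apart have MST length 1000, so a recipe that rebuilds one of them 500 units closer strictly reduces the measure even though both objects are equally complex.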
It would be fun to watch the resulting construction played forwards. It seems that this strategy would yield constructions which do the following:
- Start with a single block, splitting it and moving the children in opposite directions;
- Keep splitting sub-blocks recursively, moving them along the edges of a Steiner tree, until we get a family of blocks which (when zoomed out by several factors of two) will resemble the target pattern;
- Bombard these blocks with salvos of gliders to turn them into the target circuitry.
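The recursive splitting phase, played forwards, looks roughly like the toy schedule below. The halving rule and the target representation are illustrative only; the real compiler would follow a Steiner tree rather than a list split:

```python
def split_schedule(targets):
    """List the splits needed to end with one block per target site."""
    if len(targets) <= 1:
        return []          # a single target keeps the hand block as-is
    mid = len(targets) // 2
    left, right = targets[:mid], targets[mid:]
    # send the two children of the current block toward each half
    return ([("split", left[0], right[0])]
            + split_schedule(left) + split_schedule(right))
```

Eight targets require seven splits, arranged as a depth-3 recursion rather than a chain, matching the balanced construction trees discussed earlier.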
Of course, it won't be strictly in this order, since it needs to be careful about synthesising distant objects prior to nearer objects. But this consideration is automatically incorporated into the compiler's reverse approach, anyway.