Merging search programs
-
- Posts: 319
- Joined: January 24th, 2018, 4:36 pm
- Location: The hertzsprung gap
Merging search programs
I had an idea for a search program, a combination of lifesrc and gfind/zfind.
It would run gfind/zfind for a certain amount of computation, hand the partial result to lifesrc, run lifesrc for the same amount, then feed the partial back into gfind/zfind, and repeat.
Does anyone know how to go about doing this?
Look at me! I make patterns in golly and go on the forums! I wanna be Famous!
Re: Merging search programs
I think Bellman-LLS and zfind-LLS seem more useful (particularly the former if properly implemented).
I personally think that combining a (potentially heuristic) tree-pruning script with a SAT solver could reduce the complexity of the problem exponentially, with each component fully tackling a separate sub-problem.
Another possibility, though harder to approach, would be a glider-synthesis discovery script that separates inputs into predecessor islands and then tackles each with a SAT solver.
Re: Merging search programs
Another possibility would be using a SAT solver with zfind-lifesrc.
Re: Merging search programs
Oversimplifying massively, we have:
- qfind = parallelise(gfind + zfind)
- ikpx = parallelise(gfind + LLS)
What do you do with ill crystallographers? Take them to the mono-clinic!
Re: Merging search programs
I don't know how gfind works, or how to use SAT solving for a search program, but if someone can tell me, I have enough coding skill that I think I could make something.
-
- Posts: 1334
- Joined: July 1st, 2016, 3:58 pm
Re: Merging search programs
How do commands work for ikpx? Can you give an example search?
I manage the 5S project, which collects all known spaceship speeds in Isotropic Non-totalistic rules. I also wrote EPE, a tool for searching in the INT rulespace.
Things to work on:
- Find (7,1)c/8 and 9c/10 ships in non-B0 INT.
- EPE improvements.
Re: Merging search programs
Can anyone explain to me in detail how gfind works?
Re: Merging search programs
googleplex wrote: Can anyone explain to me in detail how gfind works?
Seconded. I've read the paper and I understand the bit where the rows of the search space are cleverly reordered. But I don't understand how that helps.
Re: Merging search programs
googleplex wrote: Can anyone explain to me in detail how gfind works?
Have you read the comments at the beginning of the source code, and then Googled anything about de Bruijn graphs that was unfamiliar? You seem to be asking for a detailed essay about gfind, but that kind of thing is hard to write without knowing what kind of detail the audience is looking for. Specific questions might get better results.
(I'm definitely not claiming to be one of the people who understands gfind, but the existing code comments seem like a reasonable place to start.)
The paper Macbi mentions is the Searching for Spaceships article (I'm assuming).
EDIT: My very vague understanding of the search method so far is
BFS = breadth-first search = hold all possibilities in memory at once, and keep adding more possibilities to the list, removing options only when it's proven they can't possibly work.
That has a pronounced tendency to run out of memory -- too many possibilities to keep track of all at once.
DFS = depth-first search = check each possibility in turn until all possibilities have been checked. You don't need much memory at all for this, but on the other hand, searches ambitious enough to actually find a spaceship will usually never end. The search will look really thoroughly at the first few possibilities, but the odds are there's nothing there, and it will spend so much time trying all the options that it never actually gets to the next branch.
Combined BFS / DFS: hold all possibilities in memory at once, to start out. When that starts to fill up memory, do a depth-first search of each possible partial spaceship, but only down to some specific depth so each of these searches is easy to complete. A lot of the "possibilities" will turn out to be impossible before that search depth is reached, so we can throw them out and not spend any more time or memory on them.
-- I think maybe the problem this solves is something like along these lines:
Let's say the first branch of the search tree has a million possible partial spaceships in it, but they are all going to prove to be impossible by (say) depth 20. If you try to do a breadth-first search to depth 20, you'll never get there, because you'd have to store in memory most of those million partials and a similar number from umpteen thousand other branches of the search tree that have similar dead ends.
But it actually doesn't take very long to run through those million partials and prove that none of them are any good -- and once you've done that, you can prune off that entire branch. So then you're going to waste a lot less time shuffling millions or billions of useless partials around in memory, and you can then search a lot deeper on the branches that (you now know for sure) go at least as deep as depth 21.
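The hybrid strategy above can be sketched in a few lines of Python. This is a toy model, not gfind's actual code: the "partials" are tuples of integers and the branching rule is invented purely to show the mechanism (BFS rounds, then depth-limited DFS pruning once the queue gets too big).

```python
from collections import deque

# Toy model of the hybrid BFS/DFS idea described above. "Partials" are tuples
# of ints and extensions() is an invented branching rule -- stand-ins for
# gfind's real row-extension logic, purely for illustration.
def extensions(partial):
    """Return the viable one-step extensions of a partial."""
    last = partial[-1]
    # Each step adds 1 or 2; any value reaching 10 is an artificial dead end.
    return [partial + (v,) for v in (last + 1, last + 2) if v < 10]

def survives_to_depth(partial, depth):
    """Depth-limited DFS: can this partial be extended `depth` more steps?"""
    if depth == 0:
        return True
    return any(survives_to_depth(c, depth - 1) for c in extensions(partial))

def hybrid_search(start, rounds, prune_depth, max_queue):
    queue = deque([start])
    for _ in range(rounds):
        next_queue = deque()
        for partial in queue:
            next_queue.extend(extensions(partial))    # the BFS step
        if len(next_queue) > max_queue:               # memory pressure:
            next_queue = deque(p for p in next_queue  # prune whole branches
                               if survives_to_depth(p, prune_depth))
        queue = next_queue
    return queue

survivors = hybrid_search((0,), rounds=5, prune_depth=3, max_queue=8)
print(len(survivors))  # → 6
```

The point of the pruning pass is visible here: a plain BFS would carry 32 partials into round 5, but the cheap depth-limited DFS proves most of them are doomed and keeps only 6.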
Re: Merging search programs
I performed the following experiment: I cut out an n×n square at the bottom right of Sir Robin, and I asked glucose to complete it. (I used my own Perl script to create the SAT problem, and I used the Knuthian encoding.) Here are timings for various n:
In all cases I asked for all solutions; in all cases I only got one solution.
I would love it if someone could try out the same experiment using wls or jls or ntzfind or some other program; it would be very nice to compare performance.
And this also shows there is no "shorter tail" that fits strictly within the 15x15 lower right of Sir Robin.
Timings should only be considered relative to each other; this was an old server with many other programs running concurrently.
I'm not brave enough to attempt 16x16.
Code:
7 0:01.19
8 0:02.46
9 0:23.82
10 1:29.90
11 5:44.08
12 22:46.95
13 1:11:45
14 3:58:04
15 18:01:37
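For anyone wondering how "asked for all solutions" works with a SAT solver: you solve, then add a blocking clause forbidding the model just found, and repeat until the formula becomes UNSAT. Here is that loop with a tiny brute-force stand-in for glucose (clauses are DIMACS-style lists of signed integers: 3 means x3 true, -3 means x3 false); the real experiment would delegate `solve` to glucose.

```python
from itertools import product

def solve(clauses, nvars):
    """Brute-force SAT stand-in: return one model as signed literals, or None."""
    for bits in product([False, True], repeat=nvars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return [i + 1 if b else -(i + 1) for i, b in enumerate(bits)]
    return None

def all_solutions(clauses, nvars):
    clauses = list(clauses)
    while (model := solve(clauses, nvars)) is not None:
        yield model
        clauses.append([-l for l in model])  # block this exact assignment
    # loop ends when the blocked formula becomes UNSAT

# (x1 OR x2) AND (NOT x1 OR NOT x2): exactly one of two variables is true,
# so there are exactly two models.
models = list(all_solutions([[1, 2], [-1, -2]], 2))
print(len(models))  # → 2
```

Each blocking clause rules out one full assignment, so "only got one solution" means the second solver call came back UNSAT.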
Re: Merging search programs
rokicki wrote: I would love it if someone could try out the same experiment using wls or jls or ntzfind or some other program; it would be very nice to compare performance.
I'm happy to do this for JLS. Before I run the experiment I'd like to clarify the initial conditions. I'm also not sure if this is the best thread, but it will do for now. My understanding of how you set up the search:
- Use phase from OP in elementary knightship thread as phase 0 (travelling 2 cells up and 1 cell left every 6 gen)
- Determine cell states for all phases
- Define NxN box in lower right (See image below for 7x7)
- Set cells in all phases of this box to unknown
- Run search to find all solutions
Assuming these parameters, here are a few results for small N (one solution found in all cases):
Code:
7 0:0.03
8 0:0.45
9 0:2.04
10 0:30.90
11 3:21.10
The 5S project (Smallest Spaceships Supporting Specific Speeds) is now maintained by AforAmpere. The latest collection is hosted on GitHub and contains well over 1,000,000 spaceships.
Semi-active here - recovering from a severe case of LWTDS.
Re: Merging search programs
Oh, interesting! So it looks like JLS was pretty comparable. When I ran my tests on a 16-core box, I had 32 other threads running, so each run got about half a logical core.
It would be nice to nail this down. I can run this exact search under more controlled conditions (idle box) and I can run JLS as well; would you be willing to send me JLS files set up appropriately? Also can I run JLS in strictly batch mode (with no window, since I have neither keyboard nor monitor on my servers)? Else I'll just run things on my Mac.
Conversely I'd be happy to send the .cnf files to anyone (but be aware they are huge, as they include the entire Sir Robin in all generations; I see 19MB files with 941K lines . . .)
Re: Merging search programs
rokicki wrote: Oh, interesting! So it looks like JLS was pretty comparable. When I ran my tests on a 16-core box, I had 32 other threads running, so each run was about half a logical core.
Right, so for N=11 the performance is roughly the same. I've run N=12 and that took longer at 29:15, so JLS definitely seems to be lagging behind as N increases (in line with the trend up to N=11).
rokicki wrote: It would be nice to nail this down. I can run this exact search under more controlled conditions (idle box) and I can run JLS as well; would you be willing to send me JLS files set up appropriately? Also can I run JLS in strictly batch mode (with no window, since I have neither keyboard nor monitor on my servers)? Else I'll just run things on my Mac.
Here's a JLS file which you can use for N up to 20. It only contains the last 30 rows and has a 7x7 box already cut out. I'm fairly certain that the lack of the frontend in the search arena won't affect the search time. There's no way to run the existing JLS in batch mode; it only has a GUI.
rokicki wrote: Conversely I'd be happy to send the .cnf files to anyone (but be aware they are huge, as they include the entire Sir Robin in all generations; I see 19MB files with 941K lines . . .)
I haven't yet tried to compile iglucose for Windows, so I'm not set up to do the comparison on this machine.
Edit: Oops, I just noticed the sort order was different from what I wrote; I was experimenting with sorting from the middle column (of the cut-out box) outwards. For some reason left-to-right seems to have slightly better performance, not sure why. I've updated the attachment.
Re: Merging search programs
Regarding compiling iglucose for Windows, you need to remove -lz from the makefile (which is fine, because Tom stripped out the zlib support so that it can use named pipes).
It suffices to just pull the latest metasat repository, which includes a Cygwin-compatible distribution of iglucose.
Re: Merging search programs
My current ideas for a search program:
WLS using metasat for deep lookahead?
Re: Merging search programs
Here are the timing results comparing JLS with rokicki's SAT solver on the Sir Robin tail completion. I only tested up to N=14, which took more than 9 times longer than SAT. For N up to 13 I also tested different sort orders: still row-wise from top to bottom, but with different starting points, namely right-to-left (R-L) and middle outwards (Mid); for even N I used the right-hand central column. All times are in seconds. The ratios aren't meaningful individually because of the different test conditions, but the trend with N shows a clear difference between JLS and SAT in how the search performance scales (for this particular problem).
The sort order does affect the search time for JLS, but not in a consistent way. I had thought that the L-R sort order might hurt JLS because of the empty columns in the one solution, but this turned out not to be the case. In fact L-R performed best for all N except 12.
----
@calcyman: Thank you for the hints. I'm not using Cygwin for anything else, so I'll probably try to get iglucose compiled natively, but if I have problems I'll follow your advice.
Code:
N Tom (SAT) JLS (L-R) JLS/SAT
7 1.19 0.03 0.025
8 2.46 0.45 0.18
9 23.82 2.04 0.085
10 89.9 30.9 0.34
11 344.1 192.1 0.56
12 1367 1755 1.28
13 4305 13272 3.08
14 14284 133942 9.38
15 64897
Code:
N     L-R     R-L     Mid
9 2.0 3.3 3.4
10 30.9 40.7 37.8
11 192.1 224.6 199.4
12 1755 1968 1618
13 13272 18508 16089
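One way to quantify the scaling difference in the first table is the geometric mean of the per-N growth factor for each program. A quick sketch (data transcribed from the table above; the exact factors depend on which range of N you pick, so N=10 upward is used to skip small-N noise):

```python
# Timings (seconds) transcribed from the comparison table in this post.
sat = {7: 1.19, 8: 2.46, 9: 23.82, 10: 89.9, 11: 344.1,
       12: 1367, 13: 4305, 14: 14284, 15: 64897}
jls = {7: 0.03, 8: 0.45, 9: 2.04, 10: 30.9, 11: 192.1,
       12: 1755, 13: 13272, 14: 133942}

def growth(times, lo, hi):
    """Geometric-mean factor by which the time grows per unit increase in N."""
    return (times[hi] / times[lo]) ** (1.0 / (hi - lo))

print(f"SAT grows by ~{growth(sat, 10, 15):.1f}x per N")
print(f"JLS grows by ~{growth(jls, 10, 14):.1f}x per N")
```

This gives roughly 3.7x per N for SAT versus roughly 8x per N for JLS, which matches the crossover visible at N=12 in the ratio column.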
Re: Merging search programs
My point is that the iglucose source code in the metasat repository removes an unnecessary dependency from the makefile, so it's strictly easier to compile [on any platform].
Re: Merging search programs
Cygwin is really slow. I find that it's faster to buy a $35 Raspberry Pi 3 and a $20 iUniker heatsink and fan.
I have a length 44 partial tail, found in 28 hours.
Re: Merging search programs
To enhance lifesrc with a SAT solver, we could use the SAT solver to determine which cells are more important than others.
- Apple Bottom
- Posts: 1034
- Joined: July 27th, 2015, 2:06 pm
- Contact:
Re: Merging search programs
googleplex wrote: Cygwin is really slow. I find that it's faster to buy a $35 raspberry pi 3 and a $20 iUniker heatsink and fan.
That doesn't really make a lot of sense -- the OS (or emulation layer, in Cygwin's case) shouldn't make a difference for CPU-intensive tasks.
I could see Windows being a *little* slower than a Unix box not running X, as Windows (probably) is more CPU-intensive itself in the background; and the C/C++ compiler may well make a difference, assuming your program of choice doesn't use assembly for the tight inner loops to begin with, but--
--if a Raspberry Pi is faster than even a modest PC, then something's clearly not right with the latter, no matter which OS you're running.
If you speak, your speech must be better than your silence would have been. — Arabian proverb
Catagolue: Apple Bottom • Life Wiki: Apple Bottom • Twitter: @_AppleBottom_
Proud member of the Pattern Raiders!
Re: Merging search programs
It's because ikpx is OS-intensive if the SAT problems are easy (and CPU-intensive if the SAT problems are difficult). It spawns an iglucose instance for each problem it encounters, and communicates with it through named pipes. If the SAT problems are easy, then iglucose will finish within a split-second, so spawning the process will be more costly than actually running the SAT solver.
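The spawn-versus-solve trade-off is easy to demonstrate. In this sketch a do-nothing child process stands in for an iglucose launch and a trivial computation stands in for an "easy" solve; absolute numbers are machine-dependent, but the spawn cost should dwarf the in-process cost by orders of magnitude.

```python
import subprocess
import sys
import time

N = 10

# Cost of spawning a process (stand-in for launching iglucose per problem).
t0 = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"])
spawn_ms = (time.perf_counter() - t0) / N * 1e3

# Cost of a trivial in-process computation (stand-in for an "easy" SAT solve).
t0 = time.perf_counter()
for _ in range(N):
    sum(range(10_000))
solve_ms = (time.perf_counter() - t0) / N * 1e3

print(f"process spawn: {spawn_ms:.2f} ms/call, in-process work: {solve_ms:.4f} ms/call")
```

This is why, for easy sub-problems, a process-per-problem design like ikpx's spends most of its wall-clock time in the OS rather than in the solver.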