Optimizing Golly for big work?

For general discussion about Conway's Game of Life.
Post Reply
User avatar
Tezcatlipoca
Posts: 81
Joined: September 9th, 2014, 11:40 am

Optimizing Golly for big work?

Post by Tezcatlipoca » September 19th, 2014, 8:01 pm

So I've been working with shapeloop in Golly with loops that evolve. Lots of fun. I have been watching patterns evolve for a week now and selecting for interesting ones. Excitingly, things have recently started to balloon in scale to the point where Golly really bogs down. The population is 24,363,367, and the active portion of the grid is ~145,000 x 145,000. I am using it unbounded, but trimming the edges as someone suggested.

Am I running up against a hard wall here, or are there things to do to optimize the number of generations Golly can perform in any given time? I don't mind so much about the animation in the intervals. I'm running on a laptop: i7 @ 2.2 GHz with 16GB of RAM, Windows 64-bit, nVidia 640M graphics card.

I notice the step sizes go way up there -- there's an 8^20. I don't notice much of an improvement going from 8^3 to 8^4 to 8^5, something like an increase of generations in the thousands over a minute (not all that much when one is dealing on this scale), and I haven't pushed it beyond 8^5 because it seems to freeze. I also notice that for some reason, when I set Hyperspeed on overnight, it often freezes and accomplishes very little, so I'm reluctant to push it much further as it is. Any advice on the subject will be most appreciated.

User avatar
Andrew
Moderator
Posts: 933
Joined: June 2nd, 2009, 2:08 am
Location: Melbourne, Australia
Contact:

Re: Optimizing Golly for big work?

Post by Andrew » September 19th, 2014, 8:27 pm

The hashlife-based algorithms (i.e. all algos except QuickLife) struggle when patterns get more and more chaotic. The freezes you are seeing are probably when Golly is doing garbage collection. Turn on Control > Show Hash Info to see some messages about what Golly is doing.

The only thing you can do to improve the performance of the RuleLoader algo is to give it more memory. Go to Preferences > Control and set the maximum memory for RuleLoader to 12000 MB (a good value for your 16GB system). There's no point trying to set the max memory much higher than that -- your entire system is likely to grind to a halt.

You might also want to set the base step to 2 rather than the default 8. That will give you finer control over changing the step size.
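To see why a smaller base gives finer control, compare the ladder of step sizes each base makes available (Golly runs base^exponent generations per step). This is just illustrative arithmetic in Python, not Golly code:

```python
# Golly advances base**exponent generations per step.  Base 8 jumps by
# factors of 8, while base 2 offers many more intermediate step sizes
# over the same range, so you can tune the step to the pattern.
base8 = [8**k for k in range(6)]    # 1, 8, 64, 512, 4096, 32768
base2 = [2**k for k in range(16)]   # 1, 2, 4, 8, ..., 32768

print(base8)   # 6 choices up to 32768
print(base2)   # 16 choices over the same range
```

With base 8, going from 8^3 to 8^4 multiplies the step by 8 in one jump; with base 2 you can creep up in factors of 2 and stop just before the freezes start.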
Use Glu to explore CA rules on non-periodic tilings: DominoLife and HatLife

User avatar
Andrew
Moderator
Posts: 933
Joined: June 2nd, 2009, 2:08 am
Location: Melbourne, Australia
Contact:

Re: Optimizing Golly for big work?

Post by Andrew » September 19th, 2014, 8:55 pm

A couple more tips I just thought of:

Avoid turning on the Hyperspeed option. That's really just a cute little feature to demonstrate the power of the hashlife algo when you know a pattern is highly repetitive.

Try ticking Edit > Disable Undo/Redo. That should speed up your edge trimming edits and free up more memory for hashlife. But be careful using that option -- you can still restore the starting pattern but you won't be able to return to later generations (unless you save them in a file). My strong advice is to enable undo/redo as soon as you've finished an experiment.
Use Glu to explore CA rules on non-periodic tilings: DominoLife and HatLife

User avatar
Tezcatlipoca
Posts: 81
Joined: September 9th, 2014, 11:40 am

Re: Optimizing Golly for big work?

Post by Tezcatlipoca » September 19th, 2014, 9:27 pm

Good tips. Thanks for considering. And yeah, I can see what you mean about the trade-off of accidentally having undo/redo disabled when you need it desperately. Maybe what I'll do is turn it off overnight and leave a note on my computer to re-enable it in the morning. The thing about this pattern is not that it's complicated. It's very regular as far as loops go, but it's huge. It's kind of weird to me that it stalls. Maybe this hash info will tell me something interesting. Where exactly do you see that once it's enabled, by the way? Oh, and do you know anything about the buffering feature? Would that be animation buffering?

User avatar
Andrew
Moderator
Posts: 933
Joined: June 2nd, 2009, 2:08 am
Location: Melbourne, Australia
Contact:

Re: Optimizing Golly for big work?

Post by Andrew » September 20th, 2014, 12:05 am

Tezcatlipoca wrote: Maybe this hash info will tell me something interesting. Where exactly do you see that once it's enabled, by the way?
The 2nd line of the status bar. See Help > Control Menu for more details.
Tezcatlipoca wrote: Oh, and do you know anything about the buffering feature? Would that be animation buffering?
See Help > View Menu.
Use Glu to explore CA rules on non-periodic tilings: DominoLife and HatLife

User avatar
dvgrn
Moderator
Posts: 10685
Joined: May 17th, 2009, 11:00 pm
Location: Madison, WI
Contact:

Re: Optimizing Golly for big work?

Post by dvgrn » September 20th, 2014, 11:49 am

Tezcatlipoca wrote: I notice the step sizes go way up there -- there's an 8^20. I don't notice much of an improvement going from 8^3 to 8^4 to 8^5, something like an increase of generations in the thousands over a minute (not all that much when one is dealing on this scale), and I haven't pushed it beyond 8^5 because it seems to freeze.
One small note [EDIT -- Oops, I see Andrew has already mentioned this also]: in cases like this I tend to set the base step size to 2 instead of 8. If you push up the step size one power of two at a time, you can pretty quickly figure out what the optimal step setting is for the pattern you're simulating.

The sky's the limit -- you can go as far above 8^20=2^60 as you want, if the pattern is regular enough; try something like Patterns/Hashlife/metapixel-galaxy.mc.gz at 2^100. But for the patterns you're running, the optimal setting is probably somewhere between 2^10 and 2^20. With too high a step size, there are just too many different hashtiles. Memory fills up, and then Golly has to spend lots of time garbage collecting -- throwing away huge hashtiles that it doesn't see in the current universe. That's what is happening during the long freezes, as Andrew says.
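The memory blow-up is easy to see in miniature. Here is a hedged, heavily simplified sketch (plain Python, not Golly's actual implementation) of the hash-consing at the heart of hashlife: identical subtrees are stored once, so a regular pattern needs only one shared node per level, while a chaotic pattern needs a distinct node for almost every block:

```python
# Minimal hash-consed quadtree, illustrating why hashlife's node cache
# stays tiny for regular patterns but explodes for chaotic ones.
import random

cache = {}

def node(nw, ne, sw, se):
    # Canonicalize: identical quadrant combinations share one cache entry.
    key = (nw, ne, sw, se)
    if key not in cache:
        cache[key] = key
    return cache[key]

def build(grid, x, y, size):
    # Recursively build the quadtree for a size x size block (size = 2^k).
    if size == 1:
        return grid[y][x]
    h = size // 2
    return node(build(grid, x, y, h),     build(grid, x + h, y, h),
                build(grid, x, y + h, h), build(grid, x + h, y + h, h))

N = 64
blank = [[0] * N for _ in range(N)]                             # regular
noisy = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]  # chaotic

cache.clear(); build(blank, 0, 0, N); regular_nodes = len(cache)
cache.clear(); build(noisy, 0, 0, N); chaotic_nodes = len(cache)
print(regular_nodes, chaotic_nodes)
```

The blank 64x64 grid compresses to one node per level (6 in total), while the random grid needs hundreds of distinct nodes -- and real hashlife also memoizes results across time, so the gap widens combinatorially as the pattern and step size grow.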

Those tiles might well show up again eventually, of course, if you run your pattern long enough, so ideally Golly could keep them all and avoid redoing those big simulations the next time around. But the problem is basically a combinatorial explosion: even if you had ten or a hundred times as much RAM, you'd see the exact same problem just a few step sizes higher.

The real problem is even bigger than that. In your experiments, you're looking for evolutionary behavior -- the emergence of complexity from simple initial conditions, or something along those lines. Golly's Hashlife is a handy exploration tool, but it gets its speed by cheating -- finding underlying regularities that are too subtle to see easily, and using those regularities to take shortcuts.

Real complexity is something that Hashlife can't handle very well at all -- other algorithms are much better. Try running a large "random soup" pattern in Hashlife vs. QuickLife in just about any rule, for example; Hashlife is much slower, at least until larger-scale emergent patterns start to appear.

In other words, as soon as you find the emergent complexity you're looking for, Golly is guaranteed to stop working! If Golly's simulation rate suddenly slows down, that might be as good a sign as any that something new and interesting has appeared.

Post Reply