Is there a way to manipulate Golly's Undo/Redo setting from a Perl script?
In Golly 2.1, some scripts can generate GiBs of undo info, eating disk space and slowing down the script. I've just written such a script.
It would be useful to be able to save the state and disable Undo at the start of the script, and restore it when the script exits.
Re: Control Golly's Undo/Redo setting from Perl?
> Is there a way to manipulate Golly's Undo/Redo setting from a Perl script?

No, mainly because I don't think it would make much sense. All the changes made by a script are meant to be "atomic"; i.e. after the script runs you want to be able to undo all those changes in one go (think of a script like invert.py that might change millions of cells, or the many generation changes made by goto.py).
> In Golly 2.1, some scripts can generate GiBs of undo info, eating disk space and slowing down the script. I've just written such a script.

I'm guessing your script has lots of g_run() calls mixed up with lots of editing calls? If so, put a g_new() call at the start of your script -- that will prevent lots of temporary files being created, and the script will run faster. This solution assumes your script is creating a pattern rather than editing an existing one.
From the undo history's point of view there are two types of scripts:
1. A script that calls new() or open() is assumed to be creating some sort of pattern. When Golly sees a new() or open() command it clears any undo history and sets an internal flag that says "don't mark this layer as dirty and don't bother recording any further changes".
2. A script that doesn't call new() or open() is assumed to modify the current pattern. Golly must record *all* the script's changes so that when it finishes you can select "Undo Script Changes" from the Edit menu.
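A type-2 script might look like this minimal sketch (in the spirit of Golly's invert.py; the helper name is made up, and the g_* subs are Golly's built-in Perl API, so run it inside Golly with a selection active):

```perl
# A type-2 sketch: it never calls g_new()/g_open(), so Golly records every
# cell change it makes, and "Undo Script Changes" can reverse them in one go.
sub invert_selection {
    # g_getselrect() returns (x, y, width, height) of the current selection
    my ($x, $y, $wd, $ht) = g_getselrect();
    for my $row ($y .. $y + $ht - 1) {
        for my $col ($x .. $x + $wd - 1) {
            # flip each two-state cell; every g_setcell() call is logged
            g_setcell($col, $row, 1 - g_getcell($col, $row));
        }
    }
}
# invert_selection();   # call inside Golly
```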
This is a useful tip that I should mention somewhere in Help > Perl/Python Scripting, probably in the section on potential problems.
If adding a new() call doesn't fix the problem, or if you can't use it because your script is of type 2, then describe the script in more detail (or post it here) and I'll see if I can think of some other solution.
Speed optimization, automation of golly perl scripts
Andrew wrote:
> I'm guessing your script has lots of g_run() calls mixed up with lots of editing calls?

Exactly.

Andrew wrote:
> If so, put a g_new() call at the start of your script -- that will prevent lots of temporary files being created and the script will run faster. This solution assumes your script is creating a pattern rather than editing an existing pattern.

Perfect, that did the trick!
The script sets up streams of ships (or any mobile pattern). I know there are a couple of similar Python scripts, but I'm a Perl guy.
The script supports multiperiod streams specified by a comma-separated list, thus the need for optimization. Regular period streams are fast to generate even using a trivial implementation, but looping 10000+ times over a multiperiod stream gets slow.
Since making the original post, I've tried several implementations, and a clear winner emerged from the benchmarks: get the multiperiod group into a cell array, then loop, using g_evolve to advance the entire group and g_putcells to place it in the output layer.
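The winning loop looks roughly like this (a sketch, not the actual script: the helper name, periods, and spacing are made up, and the g_* subs are Golly's Perl API, so the real calls only work inside Golly):

```perl
# Sketch of the fast approach: keep one cell array for the whole group,
# advance it with g_evolve(), and stamp each copy with g_putcells().
sub build_stream {
    my ($cells, $periods, $copies, $dx) = @_;
    my ($phase, $x) = ($cells, 0);
    for my $i (0 .. $copies - 1) {
        g_putcells($phase, $x, 0);                   # place the current phase
        $phase = g_evolve($phase, $periods->[$i % @$periods]);  # advance one period
        $x += $dx;                                   # space the next copy
    }
}
# Inside Golly, for example (made-up spacing):
# build_stream(g_getcells(g_getrect()), [100, 101, 100], 10000, -50);
```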
The obvious implementation, using g_putcells and g_run in a loop, is blazingly fast using HashLife for cases like single ships whose period aligns well, but becomes far too slow on multiperiod streams.

Here are the benchmark details (times in seconds):

Testbed: Golly 2.1 on a Core 2 Duo @ 1.86GHz, 32-bit Ubuntu
Test case: LWSS
Period: 100,101,100
Number: 10000
Using QuickLife:
g_evolve: 18
work layer, g_run: 324
work layer, basestep, g_step: 337
Using HashLife:
g_evolve: 184
work layer, basestep, g_step: 400
work layer, g_run: 416
work layer, g_run, hyperspeed: 435
(The benchmark and implementation info is for the benefit of other new scripters.)
Thanks to Andrew, and as a result of the benchmarks, these lines were added to the start of the script:
Code:
g_new("layer title");
g_setalgo("QuickLife");
Since it's not linked with Perl, bgolly isn't scriptable and so isn't suitable for my automation needs. Also, it appears to be much slower than golly.
In addition to the special golly-start.pl, a script file can be specified on the command line. This is good, and preferable in my case.
Once I finish writing the golly scripts needed to do the survey, I have some Perl tools which will automate the overall survey process. This will involve starting golly for each step. Compared to what golly will be doing each time, the overhead of restarting it will be negligible.
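That outer loop can be sketched like so (hypothetical script names and driver; the runner coderef is injectable only so the sketch can be exercised without Golly installed):

```perl
use strict;
use warnings;

# Hypothetical survey driver: start Golly once per step and wait for it to
# exit (each step script ends with exit(0)).  $run defaults to launching
# the real golly binary, assumed to be on PATH.
sub run_steps {
    my ($steps, $run) = @_;
    $run ||= sub { system("golly", $_[0]) };   # blocks until Golly exits
    for my $script (@$steps) {
        my $status = $run->($script);
        die "step $script failed ($status)\n" if $status != 0;
    }
    return scalar @$steps;                     # number of completed steps
}

# run_steps([ "survey-step1.pl", "survey-step2.pl" ]);   # made-up names
```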
There are two issues when using Golly from the command line in an automated scripting environment:
- There's no way to cleanly shut down Golly from within a script. Using exit(0) is a workaround, so this is a minor issue.
- There's no command line argument to start minimized, so that Golly doesn't pop up in your face if you're doing something else on the machine.
Re: Speed optimization, automation of golly perl scripts
> As it's not linked with Perl, bgolly isn't suitable for my automation needs since it's not scriptable. Also, it appears to be much slower than golly.

Have you done any actual timings? I'd be very surprised if bgolly was slower than golly -- they both share the same generating code. If anything, bgolly should be faster because it doesn't have to do any periodic checks for user events.
> There are two issues when using Golly from the command line in an automated scripting environment:
> - A way to cleanly shut down Golly from within a script. Using exit(0) is the workaround, so this is a minor issue.
> - A command line argument to start minimized, so that it doesn't pop up in your face if you're doing something else on the machine.

Good suggestions -- I'll add them to the TODO list, but they probably won't make it into 2.2 (I'm struggling to find time to finish the bounded universe changes).