
 

Slaughter is at the cross-roads

1 November 2011 21:50

There are many system administration and configuration management tools available; I've mentioned them in the past, and we're probably all familiar with our pet favourites.

The "biggies" include CFEngine, Puppet, Chef, and Bcfg2. The "minis" are largely in-house tools, or abuses of existing software such as Fabric.

My own personal solution manages my home network, and three dedicated servers I pay for in various ways.

Lately I've been setting up some configuration "stuff" for a friend, and I've elected to manage some of the setup with this system of my own, so I guess I need to decide what I'm going to do going forward.

Slaughter is well maintained, largely by virtue of not doing too much. The things it does are genuinely useful and entirely sufficient to handle a lot of the common tasks - and because the only server-side requirement is an HTTP server, and the only client-side requirement is cron, it is trivial to deploy.
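The pull model really is that simple to sketch: a cron job fetches this host's policy over plain HTTP and acts on it. The URL layout and function names below are hypothetical illustrations, not Slaughter's actual ones (Slaughter itself isn't written in Python).

```python
import socket
import urllib.request


def policy_url(base, hostname):
    """Build the per-host policy URL (this layout is a made-up example)."""
    return "%s/policies/%s.policy" % (base.rstrip("/"), hostname)


def fetch_policy(base):
    """Pull this host's policy over plain HTTP, as a cron job would."""
    url = policy_url(base, socket.gethostname())
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Scheduling is just a crontab entry, e.g.:
#   */15 * * * * /usr/local/bin/pull-policy
```

The appeal is that the "server" is any web server you already run, and the "scheduler" is cron - nothing new to deploy on either side.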

In the past I've thought of three alternatives that would make it more complex:

  • Stop using HTTP and have a mini-daemon to both serve and schedule.
  • Stop using HTTP and use rsync instead.
  • Rewrite it in JavaScript. (Yes, really.)

Each approach has its appeal. I like the idea of only executing GPG-signed policies, and that would be trivial if there were a real server in place. It could also use SSL, because that's all you need for security (ha!).
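That said, refusing to run unsigned policies doesn't strictly need a real server: a detached signature can be fetched alongside the policy and checked client-side. A rough sketch (file names hypothetical; the only real interface used is `gpg --verify`):

```python
import subprocess


def verify_cmd(policy, signature):
    # gpg --verify takes the detached signature first, then the signed file
    return ["gpg", "--verify", signature, policy]


def run_if_signed(policy, signature):
    """Execute the policy only when its detached GPG signature checks out."""
    result = subprocess.run(verify_cmd(policy, signature), capture_output=True)
    if result.returncode != 0:
        raise RuntimeError("refusing to run unsigned policy: %s" % policy)
    # Assumes the policy is directly executable once verified.
    subprocess.run([policy], check=True)
```

Key distribution is the usual catch: the client needs the signing public key in its keyring before any of this buys you anything.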

On the other hand, using rsync would let me trivially implement the one primitive I genuinely miss at times - the ability to recursively download and install a remote directory tree. (I currently solve this by downloading a .tar file and unpacking it. Not good. It doesn't cope with template expansion, and it's fiddlier than I like.)
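The tar workaround is easy to demonstrate, and the demonstration also shows why it's fiddly: the archive gets unpacked wholesale, with no chance to expand templates file-by-file on the way through. A self-contained sketch (the pack side stands in for whatever produces the archive on the server):

```python
import io
import tarfile


def pack(files):
    """Build a tar.gz in memory from {name: bytes} (stand-in for the server side)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()


def unpack(blob, dest):
    """Unpack the downloaded archive as-is: no per-file template expansion."""
    with tarfile.open(fileobj=io.BytesIO(blob), mode="r:gz") as tar:
        tar.extractall(dest)
```

Any `$variable` placeholders inside the archived files arrive on disk verbatim, which is exactly the limitation complained about above; a recursive-fetch primitive could expand each file as it lands instead.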

In the lifetime of the project I think I've had 20-50 feature requests or comments, which suggests it might actually be used by 50-100 people. (Ha! Optimism)

In the meantime I'll keep a careful eye on the number of people who download the tarball & the binary packages...

ObQuote: "I have vermin to kill." - Kill Bill

2 comments

 

Comments on this entry

-dsr- at 21:56 on 1 November 2011

In our homemade system (tuttle -- http://dev.smartleaf.com), we leave the transportation method open. In actual use, though, it's rsync over ssh with restricted keys and an odd port.

rsync is tremendously efficient if you have a lot of code to install, and the more so if (as is likely) only small parts change at any given time. Everything is pull.

The rsync source, of course, is a version-controlled checkout.


Steve Kemp at 21:59 on 1 November 2011
http://www.steve.org.uk/

I like the idea of the core being transport-agnostic, so you could fetch a "policy.tar.gz" and a corresponding signature via rsync, HTTP, or even FTP - but it does have implications for how you handle things.

Specifically, I've got a "download file" primitive which allows template expansion, so I'm keen to avoid having to rewrite that for different transport mechanisms when I only need one. Although that isn't to say it won't happen.
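For what it's worth, the expansion half of such a primitive is transport-agnostic by construction; only the fetch side varies. A minimal illustration using Python's string.Template (Slaughter itself isn't written in Python, and the function name is hypothetical):

```python
from string import Template


def expand(text, variables):
    """Expand $name placeholders in fetched file content.

    Unknown placeholders are left alone rather than raising,
    so partially-templated files still pass through.
    """
    return Template(text).safe_substitute(variables)
```

So `expand("ServerName $fqdn\n", {"fqdn": "web01.example.com"})` fills in the name, regardless of whether the text arrived over rsync, HTTP, or FTP.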