

Entries tagged cfengine

Lets go dancing on the backs of the bruised

26 December 2006 21:50

My Christmas day was primarily spent at home watching Red Dwarf.

Apart from that I made new releases of xen-shell and xen-tools.

I was surprised to receive a sudden flurry of bug reports. No fewer than ten bugs reported by two users in one day! Still, I fixed most of them and made a new release.

It was nice to notice Isaac Clerencia writing about using xen-tools for automated testing. The idea has occurred to me before: create a fresh Xen installation of Debian, run a test suite, and trash the system afterwards. I even set up a test Yawns builder for a while.

Are people finding interesting uses for xen-tools? I'd love to have some use-cases, and more user feedback is always appreciated. Feel free to get in touch or comment here.

Personally I use a few Xen guests on my home LAN and I regularly create new Sarge images for building backported Xen packages. Only recently did I move my home cfengine setup onto a domU.

In other news I recently received a copy of Programming Ruby from my Amazon wishlist; I'm learning a lot about Ruby and I think I'm in love. The only downside is the mental confusion I experience when I see "@variable" and "$variable" used. I am too wedded to Perl to stop thinking "array" and "string" respectively.

I should find a toy project to write in Ruby to get used to it. Right now nothing occurs to me, but I'm sure it is just a matter of time.



Time is like a fuse, short and burning fast,

28 January 2007 21:50


Busy weekend: A brief trip to Glasgow, a lot of rat-lovin' and a few minutes to hack on CFEngine package automation.

I discovered insane performance problems running Ruby on Rails applications under libapache2-mod-fastcgi, so I switched to Apache2, mod_proxy & mongrel.

The process wasn't as painless as I'd expected - so I might write it up shortly.

In other rails-news: I can't get Typo running on Etch. Migration from WordPress 2 fails in strange ways. Help?!

In other news, xmms on my AMD64 machine, reading /home/mp3 over an NFS share, locks up every couple of days. I'm not sure if this is an AMD64 problem, or if the NFS usage is responsible; I should investigate. (The GUI is visible and responsive, but hitting "next", "play", etc., results in no action. I usually have to kill it and then restart.)



A new and glorious moment

18 May 2007 21:50

Puppet, the system-administration tool similar to CFEngine, is very nice.

What is less nice is the lack of decent examples. The wiki has lots of "recipes", but these give no real explanation of how to install them.

I know that I need manifests/site.pp which will control which nodes get which actions applied to them, but beyond that I'm a little lost!

I'm hoping to replicate my cfengine setup. Most of it is pretty simple "append line to file if missing", and copying files from the central server. (The latter I've got working nicely.)

Hopefully somebody will now point me at a good tutorial that doesn't stop once the tool is installed (like this one does).



Only after disaster can we be resurrected

6 May 2008 21:50

I leave my main desktop logged in for months at a time, as demonstrated by my previous bug with the keyboard transition for xorg.

The screen is set up to lock after 5 minutes of idle, so there's no real security issue, and it is extremely convenient.

Every few weeks, though, my desktop gets into a funny state where no new windows may be opened. Existing applications continue running without any problems, but nothing new (windows, shells, whatever) can be opened.

Tonight it happened again.

And the lightbulb went on in my head: My flat uses CFEngine to manage itself. (Two physical servers here, with 5-10 Xen guests, and a number of remote servers.)

One of the things that CFEngine is configured to do is to tidy directories of files which are older than 30 days. Including /tmp.

So that explains that.

Every month the magic cookie in $TMP would be nuked, and X would disallow new connections.

I guess the next time this happens I should look at using xauth to fix the issue, but generally I just log out, make coffee, smoke a cigarette, and log in again.

In conclusion: I'm a stupid-head.

ObQuote: Fight Club



And if someone gets upset you say, "chill out"!

25 December 2009 21:50

It was interesting to see Clint Adams describe love and dissatisfaction with configuration management.

At work I've got control of 150(ish) machines which are managed via CFEngine. These machines are exclusively running Debian Lenny. In addition to these hosts we also have several machines running Solaris, OpenBSD, and various Ubuntu releases for different purposes.

Unfortunately I made a mistake when I set up the CFEngine infrastructure: when writing all the policies, files, etc., I essentially said "OK, CFEngine controlled? Then it is Debian". (This has been slowly changing over time, but not very quickly.)

But in short this means that the machines running *BSD, Solaris, and non-Debian distributions haven't been managed as well via CFEngine as the rest, even though technically they could have been.

A while back I decided that it was time to deal with this situation. Looking around at the various options, it seemed Puppet was the way of the future; using it we could rewrite/port our policies and make sure they were both cleanly organised and made no assumptions.

So I set up a puppetmaster machine, then installed the client on a range of machines (OpenBSD, Debian Lenny, Ubuntu, Solaris) so that I could convince myself my approach was valid, and that the tool itself could do everything I wanted it to do.

Unfortunately using Puppet soon became painful. It has primitives for doing various things such as maintaining local users, working with cronjobs, and similar. Unfortunately not all primitives work on all platforms, which rather makes me think "what's the point?". For example, the Puppet client running on FreeBSD will let you add a local user and set up a ~/.ssh/authorized_keys file, but will not let you set a password. (Which means you can add users who can login, but who then cannot use sudo. Subpar.)

At this point I've taken a step back. As I think I've mentioned before, I don't actually do too much with CFEngine. Just a few jobs:

  • Fetch a file from the master machine and copy into the local filesystem. (Making no changes.)
  • Fetch a file from the master machine, move it to the local system after applying a simple edit. (e.g. "s/##HOSTNAME##/`hostname`/g")
  • Install a package.
  • Purge a package.
  • Setup local user accounts, with ~/.ssh handled properly.
  • Apply one-line sed-style edits to files. (e.g. "s/ENABLED=no/ENABLED=yes/" /etc/default/foo)

(i.e. I don't use cron facilities, I add files to cron directories. Similarly I don't use process monitoring, instead I install the monit package and drop /etc/monit/monitrc into place.)
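The second job in that list is just a substitution pass before the file is moved into place. A minimal Python sketch, assuming the ##HOSTNAME## token from the example above (the function name is my own):

```python
import socket

def expand_template(text, hostname=None):
    """Replace the ##HOSTNAME## placeholder with the local hostname,
    mirroring the s/##HOSTNAME##/`hostname`/g edit described above."""
    if hostname is None:
        hostname = socket.gethostname()
    return text.replace("##HOSTNAME##", hostname)
```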

There is a pretty big decision to make in the future with the alternatives being:

  • Look at Chef.
  • Stick with CFEngine but start again with a better layout, with more care and attention to portability things.
  • Replace the whole mess with in-house-fu.

If we ignore the handling of local users and sudo setup, then the tasks that remain are almost trivial. Creating a simple parser for a "toy-language" which can let you define copies, edits, and package operations would be an afternoon's work. Then add some OpenSSL key authentication and you've got a cfengine-lite.
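To give a feel for how small that afternoon's work would be, here is a hedged Python sketch of such a parser. The keywords and the one-statement-per-line grammar are invented for illustration, not any real cfengine-lite:

```python
def parse_policy(text):
    """Parse a toy policy: one statement per line, '#' starts a comment.
    Recognised forms (invented for this sketch):
        copy SRC DEST
        edit FILE s/OLD/NEW/
        install PACKAGE
        purge PACKAGE
    Returns a list of (verb, args) tuples for a later execution pass."""
    statements = []
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.split("#", 1)[0].strip()
        if not line:
            continue
        verb, *args = line.split()
        if verb == "copy" and len(args) == 2:
            statements.append(("copy", tuple(args)))
        elif verb == "edit" and len(args) == 2:
            statements.append(("edit", tuple(args)))
        elif verb in ("install", "purge") and len(args) == 1:
            statements.append((verb, args[0]))
        else:
            raise SyntaxError("line %d: cannot parse %r" % (lineno, raw))
    return statements
```

The execution pass would then walk the tuples and call the matching primitive; the hard part, as ever, is making those primitives portable.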

For the moment I'm punting the decision but I'm 90% certain that the choice is CFEngine vs. Chef vs. In-House-Fu - and that puppet is no longer under consideration.

Anyway, despite having taken months to arrive at this point, I'm going to continue to punt. Instead my plan is to move toward using LDAP for all user management, login stuff, and sudo management. That will be useful in its own right, and it will coincidentally mean that whatever management system we do end up using will have one less task to deal with. (Which can only be a good thing.)

ObFilm: Terminator II



I don't like this ending...

16 January 2010 21:50

I've talked before about the minimal way in which I've been using a lot of the available automation tools. I tend to use them to carry out only a few operations:

  • Fetch a file from a remote source.
    • If this has changed run some action.
  • Ensure a package is installed.
    • If this is carried out run some action.
  • Run a command on some simple criterion.
    • E.g. Every day at 11pm run a mirror.

In the pub I've had more than a few chats about how to parse a mini-language and carry these operations out, and what facilities other people use. It'd be almost trivial to come up with a mini-language, but the conclusion has always been that such mini-languages aren't expressive enough to give you the arbitrary flexibility some people would desire. (Nested conditionals and the ability to do things on a per-host, per-day, per-arch basis for example.)

It struck me last night that you could instead cheat. Why not run scripting languages directly on your client nodes? Assume you could write your automation in Ruby or Perl, and all you need to do is define a few additional primitives.

For example:

#  /policies/default.policy - the file that all clients nodes poll.

#  Fetch the per-node policy if it exists.
FetchPolicy $hostname.policy ;

#  Ensure SSH is OK
FetchPolicy ssh-server.policy ;

#  Or explicitly specify the URL:
# FetchPolicy http://example.com/policies/ssh-server.policy ;

#  Finally a quick fetch of a remote file.
if ( FetchFile(
                Source => "/etc/motd",
                Dest => "/etc/motd",
                Owner => "root",
                Group => "root",
                Mode => "0644" ) )
{
    RunCommand( "id" );
}

This default policy attempts to include some other policies, which are essentially Perl files with some additional "admin-esque" primitives, such as "InstallPackage", "PurgePackage", and "FetchFile".

FetchFile is the only one I've fully implemented. Given a server it will fetch http://server/prefix/files/$FILENAME into a local file, and will set up the owner/gid/mode. If the fetch succeeded, and the contents differ from the current contents of the named file (or the current file doesn't exist), it will be moved into place and the function will return true.

On the server side I just have a layout that makes sense:

|-- files
|   `-- etc
|       |-- motd
|       |-- motd.silver.my.flat
|       `-- motd.gold
`-- policies
    |-- default.policy
    |-- ssh-server.policy
    `-- steve.policy

Here FetchFile has been implemented to first request /files/etc/motd.gold.my.flat, then /files/etc/motd.gold, and finally the global file /files/etc/motd.
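That most-specific-first lookup is easy to express. A small Python sketch, assuming a fully-qualified hostname such as gold.my.flat and ignoring the prefix component (the function name is invented):

```python
def candidate_urls(server, path, fqdn):
    """Return the URLs FetchFile would try, most specific first:
    file.FQDN, then file.SHORTNAME, then the plain file."""
    short = fqdn.split(".", 1)[0]
    base = "http://%s/files%s" % (server, path)
    return ["%s.%s" % (base, fqdn),
            "%s.%s" % (base, short),
            base]
```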

In short, you don't want to be forced to write Perl which would run things like this:

# install ssh
if ( -e "/etc/apt/sources.list" )
{
  # we're probably debian
  system( "apt-get update" );
  system( "apt-get install openssh-server" );
}

You just want to be able to say "Install Package foo", and rely upon the helper library / primitives being implemented correctly enough to make that work.
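Such a helper is essentially platform detection wrapped around the native package manager. A hedged Python sketch (the sources.list test is the one from the example above; the injectable exists/run parameters are purely so the sketch can be exercised without touching a real system):

```python
import os
import subprocess

def install_package(name, exists=os.path.exists, run=subprocess.check_call):
    """Install a package with whichever package manager the host appears
    to have. Only apt and pkg_add are handled in this sketch."""
    if exists("/etc/apt/sources.list"):
        run(["apt-get", "-y", "install", name])
    elif exists("/usr/sbin/pkg_add"):
        run(["pkg_add", name])
    else:
        raise NotImplementedError("no supported package manager found")
```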

I'll probably stop there, but it has given me a fair amount to think about. Not least of which: what are the minimum required primitives to usefully automate client nodes?

ObFilm: Moulin Rouge!



More competition for server management and automation is good

2 February 2013 21:50

It was interesting to recently read Martin F. Krafft's botnet-like configuration management proposal.

Professionally I've used CFEngine, which in version 2.x supported a bare minimum of primitives, along with a distribution system to control access to a central server. Using these minimal primitives you could do almost anything:

  • Copy files, and restart services that depend upon them.
  • Make minor edits to files. (Appending lines not present, replacing lines you no longer want, etc.)
  • Install / remove packages.
  • More ...

Now that I have my mini cluster (and even before, when I had 3-5 machines) it was time to look around for something for myself.

I didn't like the overhead of Puppet, and many of the other systems. Similarly I didn't want to mess around with weird configuration systems. From CFEngine I'd learned that a few simple primitives would be sufficient to manage many machines, provided you could wrap them in a real language for control flow, loops, conditionals, etc. What more natural choice was there than Perl, the sysadmin's army-knife?

To that end slaughter was born:

  • Download policies (i.e. rules) to apply from a central machine, using nothing more complex than HTTP.
  • Entirely client-driven, and scheduled via cron.

Over time it evolved so that HTTP wasn't the only transport. Now you can fetch your policies, and the files you might serve, via git, hg, rsync, http, and more.

Today I've added one final addition, and now it is possible to distribute "modules" alongside policies and files. Modules are nothing more than perl modules, so they can be as portable as you are careful.

I envisage writing a couple of sample modules; for example one allowing you to list available sites in Apache, disable the live ones, enable/disable mod_rewrite, etc.

These modules will be decoupled from the policies, and will thus be shareable.

Anyway, I'm always curious to learn about configuration management systems, but I think that even though I've reinvented the wheel I've done so usefully. The DSLs that other systems use can be fiddly and annoying; using a real language at the core of the system seems like a good win.

There are systems layered upon SSH, such as Fabric, Ansible, etc., and that was almost a route I went down, but ultimately I prefer the notion of client-pull to server-push, although it is possible that in the future we'll launch a mini-daemon to allow a central host (or hosts) to initiate a run.