
Entries posted in January 2010

For the record, that's a question you never have to ask.

8 January 2010 21:50

Five years ago I spent an hour wandering around a large department store looking to buy a kettle & a set of bathroom scales. Much to the amusement of the woman I was shopping with, I spent a very long time trying to find the cheapest available set of scales. (We're talking at least 20 minutes, due to the nature of the store and the crowds.)

Once I'd selected the cheapest possible set of bathroom scales we walked over to the kitchen section of the store. I glanced over all the available kettles and picked up the one that looked the nicest (in terms of size, shape, and handle design) with no regard for the price at all.

Why? A set of bathroom scales I use maybe twice a year. A kettle I use in excess of ten times a day. Something you use that often should be right, even if over time you take it for granted and forget about it. (FWIW the scales were £6.50 and the kettle cost me £39.95 - John Lewis, 15/03/2005 - I kept the receipt!)

I'll haggle and quibble over prices for a lot of things, trying to ensure that I don't pay too much. But there are items which are worth paying for (and I don't just mean that "expensive == good" idea some people seem to have). On that basis I'll think nothing of paying £150 for a pair of shoes for example, even though I'll go out of my way to save £5-£10 on a DVD player. Because shoes are important, used very very often, and DVD players just aren't.

(ObReference: I have one pair of shoes. I have five pairs of boots. I might pretend I don't but I also have a pair of sandals. Sshhh it'll be our little secret. ;)

Anyway, today my kettle broke and I had to buy a new one at short notice. I did so, and the replacement is obviously more advanced: it boils quickly and quietly, which is technically an advantage but in practice is actually a drawback.

Generally speaking I'll fill the kettle, turn it on, then wander away. I'll only return to the kitchen to make my delicious beverage when I hear the "click" signalling that the kettle's job is done. This new one? From outside the kitchen I cannot hear it at all...

In conclusion: technology and progress are all around us. Sometimes a technical step forward ("being quiet") is a bad thing.

In other news I'm fighting with IPv6 & a head cold. Both suck.

ObTitle: Alias

| 3 comments

 

Cover your heart! Cover your heart!

13 January 2010 21:50

I've long been a fan of Danga's Memcached; I first wrote about it back in 2005.

Recently I've been looking at the persistent version, memcachedb, which is essentially a Berkeley Database using the memcached protocol as the transport layer. This gives you persistence for free, and means you can rely upon the content being present, rather than just hoping it will be there as an opportunistic cache.
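
Because it speaks the memcached protocol, any existing client works unchanged. A tiny sketch using the Cache::Memcached module - the server address assumes memcachedb's default port, and the key is invented:

use strict;
use warnings;

use Cache::Memcached;

#  memcachedb speaks the memcached protocol, so a standard
#  client works unchanged; the port here is an assumption.
my $db = Cache::Memcached->new( { servers => ["127.0.0.1:21201"] } );

$db->set( "greeting", "Hello, world" );    # persisted, not merely cached.
print $db->get("greeting"), "\n";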

However, in this brave new world of NoSQL I've been mulling over different options for new projects / new toys. I have to say that memcachedb lost out for me, and that Redis is my preferred flavour of new hot.

Redis has a rich API allowing you to store interesting datatypes such as lists, and it allows both replication and distribution (neither of which I need just yet, but both are useful to have as options for the future).
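
As a small taste of that richer API, here's a sketch using the Redis module from CPAN; the server address and key names are purely illustrative:

use strict;
use warnings;

use Redis;

my $r = Redis->new( server => "127.0.0.1:6379" );

#  Lists are a native datatype: push a couple of entries,
#  then pull back the whole range in a single call.
$r->lpush( "recent:logins", "steve" );
$r->lpush( "recent:logins", "bob" );

print join( ", ", $r->lrange( "recent:logins", 0, -1 ) ), "\n";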

The documentation is great, including a well-documented Twitter clone, and the facilities available are a step up from memcached, albeit the obvious extensions you'd expect if you were doing interesting things.

The redis-server package is already part of Debian GNU/Linux, and creating a .deb package of the Redis module was trivial.

So now that I'm familiar with it, what to use it for? Well I think the first thing will be the new 2010 Valentines site.

Over the years I've hosted several Livejournal Valentine sites, and this year is no different. (Why? I like matching people, and I like social stuff. This is why I ran ctrl-alt-delete in the past; even though that failed, it was a great learning experience and I'd do it again if I had the time/patience/more optimism.)

So far the site only has a couple of users but I fully expect it to ramp up considerably before the magical day (ha!). In the past I've limited the lifetime of the site(s) from Feb 1st to Feb 14th and still managed to achieve 5000+ users. Right now I'm using flat files for storage specifically to avoid database-load issues (which caused me considerable pain one year).

I suspect the backend will be Redis-based in the very very near future, but in the meantime if you're open to amusement and use Livejournal then feel free to poke at it: ljvalentine.com.

ObFilm: Indiana Jones and the Temple of Doom

| 7 comments

 

I don't like this ending...

16 January 2010 21:50

I've talked before about the minimal way in which I've been using a lot of the available automation tools. I tend to use them to carry out only a few operations:

  • Fetch a file from a remote source.
    • If this has changed run some action.
  • Ensure a package is installed.
    • If this is carried out run some action.
  • Run a command on some simple criterion.
    • E.g. Every day at 11pm run a mirror.

In the pub I've had more than a few chats about how to parse a mini-language and carry these operations out, and what facilities other people use. It'd be almost trivial to come up with a mini-language, but the conclusion has always been that such mini-languages aren't expressive enough to give you the arbitrary flexibility some people would desire. (Nested conditionals and the ability to do things on a per-host, per-day, per-arch basis for example.)

It struck me last night that you could instead cheat. Why not run scripting languages directly on your client nodes? Assume you could write your automation in Ruby or Perl, and all you need to do is define a few additional primitives.

For example:

#
#  /policies/default.policy - the file that all clients nodes poll.
#

#
#  Fetch the per-node policy if it exists.
#
FetchPolicy $hostname.policy ;

#
#  Ensure SSH is OK
#
FetchPolicy ssh-server.policy ;

#
#  Or explicitly specify the URL:
#
# FetchPolicy http://example.com/policies/ssh-server.policy ;

#
#  Finally a quick fetch of a remote file.
#
if ( FetchFile(
                Source => "/etc/motd",
                Dest => "/etc/motd",
                Owner => "root",
                Group => "root",
                Mode => "0644" ) )
{

    RunCommand( "id" );
}

This default policy attempts to include some other policies, which are essentially Perl files that have some additional "admin-esque" primitives, such as "InstallPackage", "PurgePackage", and "FetchFile".

FetchFile is the only one I've fully implemented. Given a server it will fetch http://server/prefix/files/$FILENAME into a local file, and will set up the owner/gid/mode. If the fetch succeeded and the contents differ from the current contents of the named file (or the current file doesn't exist) it will be moved into place and the function will return true.

On the server side I just have a layout that makes sense:

.
|-- files
|   `-- etc
|       |-- motd
|       |-- motd.silver.my.flat
|       `-- motd.gold
`-- policies
    |-- default.policy
    |-- ssh-server.policy
    `-- steve.policy

Here FetchFile has been implemented to first request the fully-qualified per-host file /files/etc/motd.gold.my.flat, then the short-hostname file /files/etc/motd.gold, and finally the global file /files/etc/motd.
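
As a rough sketch of how such a primitive might hang together - assuming LWP::Simple for the transport and a few globals for the server and hostnames; the real implementation differs in the details:

use strict;
use warnings;

use File::Compare qw( compare );
use File::Copy qw( move );
use File::Temp qw( tempfile );
use LWP::Simple qw( get );

#  Assumed globals: the central server, plus the local hostnames.
our $SERVER   = "http://example.com";
our $FQDN     = "gold.my.flat";
our $HOSTNAME = "gold";

sub FetchFile
{
    my (%args) = @_;

    #  Try the most specific name first, then the global fallback.
    foreach my $suffix ( ".$FQDN", ".$HOSTNAME", "" )
    {
        my $content = get( $SERVER . "/files" . $args{'Source'} . $suffix );
        next unless ( defined $content );

        #  Save the fetched content to a temporary file.
        my ( $fh, $tmp ) = tempfile();
        print $fh $content;
        close($fh);

        #  If the destination exists and is identical, do nothing.
        if ( -e $args{'Dest'} && compare( $tmp, $args{'Dest'} ) == 0 )
        {
            unlink($tmp);
            return 0;
        }

        #  Set the ownership and permissions, then move into place.
        chown( scalar getpwnam( $args{'Owner'} ),
               scalar getgrnam( $args{'Group'} ),
               $tmp );
        chmod( oct( $args{'Mode'} ), $tmp );
        move( $tmp, $args{'Dest'} );
        return 1;
    }

    return 0;
}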

In short you don't want to be forced to write perl which would run things like this:

# install ssh
if ( -e "/etc/apt/sources.list" )
{
  # we're probably debian
  system( "apt-get update" );
  system( "apt-get install openssh-server" );
}

You just want to be able to say "Install Package foo", and rely upon the helper library / primitives being implemented well enough that it just works.
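
For example a hypothetical InstallPackage primitive could hide the distribution-sniffing in one place (the name matches the primitive mentioned earlier, but this body is just a sketch):

sub InstallPackage
{
    my (%args) = @_;
    my $pkg = $args{'Package'};

    if ( -e "/etc/apt/sources.list" )
    {
        #  We're probably Debian-derived: install non-interactively.
        return ( system( "apt-get", "-q", "-y", "install", $pkg ) == 0 );
    }

    #  Other distributions would gain their own branches here.
    return 0;
}

#  Inside a policy this collapses to a single line:
#    InstallPackage( Package => "openssh-server" );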

I'll probably stop there, but it has given me a fair amount to think about. Not least of which: what are the minimum primitives required to usefully automate client nodes?

ObFilm: Moulin Rouge!

| 6 comments

 

Dammit, Martin! This is compressed air!

17 January 2010 21:50

So I previously mentioned I'd knocked up a simple automation tool for deploying policies (read "scripts") from a central location to a number of distinct machines.

There seemed to be a small amount of interest, so I've written it all up:

  • slaughter - Perl System Administration & Automation tool

Why slaughter? I have no idea. Yesterday evening it made sense, somehow, on the basis that it rhymed with "auto" (as in automation). This morning it made less sense. But meh.

The list of primitives has grown a little, and the brief examples should provide a bit of flavour.

In short you:

  • Install the package upon a client you wish to manage.
  • When "slaughter" is invoked it will fetch http://example.com/slaughter/default.policy
    • This file may include other policy files via "IncludePolicy" statements.
  • Once all the named policies have been downloaded/expanded they'll be written to a local file.
  • The local file will have Perl-fu wrapped around it such that the Slaughter::linux module is available (roughly as sketched after this list).
    • This is where the definitions for "FetchFile", "Mounts", etc are located.
  • The local file will be executed then removed.
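
Sketched in Perl, the flow looks roughly like this - everything except the Slaughter::linux module name, the IncludePolicy statement, and default.policy is an illustrative assumption:

use strict;
use warnings;

use File::Temp qw( tempfile );
use LWP::Simple qw( get );

#  Fetch a policy, recursively expanding IncludePolicy lines.
sub expand
{
    my ($url) = @_;

    my $content = get($url);
    die "Failed to fetch $url" unless ( defined $content );

    my $out = "";
    foreach my $line ( split( /\n/, $content ) )
    {
        if ( $line =~ /^\s*IncludePolicy\s+(\S+)/ )
        {
            $out .= expand("http://example.com/slaughter/$1");
        }
        else
        {
            $out .= $line . "\n";
        }
    }
    return $out;
}

#  Wrap the expanded policies so the primitives are available,
#  write the result to a local file, execute it, then remove it.
my $body = "use Slaughter::linux;\n";
$body .= expand("http://example.com/slaughter/default.policy");

my ( $fh, $file ) = tempfile();
print $fh $body;
close($fh);

system( "perl", $file );
unlink($file);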

All in all it's probably more complex than it needs to be, but I've managed to do interesting things purely with these new built-in primitives, and none of it is massively Debian-specific, or even Linux-specific.

ObSubject: Jaws

| 3 comments

 

We have to be ready to do anything. Do you hear me?

22 January 2010 21:50

Good people steal ideas, right? On that basis I set up a static domain to host the JavaScript and icons I use upon a few different sites & projects. This was prompted by the release of a new version of the excellent jQuery library.

I also managed to put together a tremendous hack to solve a pretty annoying problem: running multiple distributions from a single external kernel under KVM.

Ubuntu users, in particular, will be well aware of dmesg SPAM coming from the use of CONFIG_SYSFS_DEPRECATED.

In short the way that the kernel presents information beneath the /sys tree has changed over the life of the kernel - and this has a knock-on effect to the userspace supplied by different distributions and releases of GNU/Linux.

Some distributions need an "old" kernel and an "old" udev with "old" udev rules in order to create the appropriate device nodes such that the kernel will boot & mount its filesystems. (i.e. These need CONFIG_SYSFS_DEPRECATED to be set.)

Conversely some distributions mandate a "new" minimum kernel version, and supply a "new" version of udev with "new" udev rules and they absolutely will not function when presented with an "old" kernel. (i.e. They must have kernels without CONFIG_SYSFS_DEPRECATED set.)

I've solved this problem via a kernel patch which is both evil and genius. The details are a little me-specific, but in short:

  • devtmpfs is used to set up and mount an initial /dev tree before /sbin/init is launched.
  • udev launches later and mounts a tmpfs over /dev such that it can start creating its own nodes.
  • At this point the evil begins: I've patched the kernel such that any attempt to mount a tmpfs filesystem at /dev is silently changed to mount a devtmpfs filesystem instead.
    • The alternative is that udev creates many nodes, but fails to create the root & swap nodes, such that the KVM guests fail to boot.

Ultimately udev doesn't get an empty /dev tree to play with; instead it finds one already pre-populated, so any devices it fails to create are present regardless, because the devtmpfs implementation has already created them.

Genius. And evil. So very evil.

Meh.

Steal that idea. I dare you... (I'm impressed by how well devtmpfs works, and by how easily I was able to make my "patch of evil"(tm). Just a few lines in fs/namespace.c.)

ObSubject: The Last House On The Left

| 2 comments

 

What the hell are you laughing at?

24 January 2010 21:50

Slaughter

I received my first patch to slaughter today, which made me happy.

(I've made a new release including it, and updated the list of primitives to actually document the file-deletion facilities which previously I'd omitted to avoid encouraging mass-breakage.)

Signing Binaries

Andrew Pollock mentions that the days of elfsign might be numbered.

This is a shame because I've always liked the idea of signing binaries. Once upon a time, in the 2.4.x days, I wrote a kernel patch which would refuse to execute unsigned binaries. (This was mostly a waste of time, since it denied the execution of shell scripts, which meant that the system init scripts mostly failed. My solution in the end was to only modprobe my module once the system was up and running, and hope for the best...)

Having performed only a quick search, I don't see anything like that available right now.

  • elfsign will let you store a binary's MD5 hash.
  • bsign will let you sign a binary with a GPG key.

But where is the kernel patch to only execute such hashed/signed binaries, preventing the execution of random shell scripts and potentially trojaned binaries?

Without that I think signing binaries is a crazyish thing to do. Sure, you can test that a file hasn't been modified, but even without those tools you can do the same thing via md5sums.
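
For example, verifying a file against a recorded hash needs nothing more than Digest::MD5 - the path and recorded value below are invented for illustration:

use strict;
use warnings;

use Digest::MD5;

#  Compute the MD5 of a file, as md5sum would report it.
sub md5_of
{
    my ($file) = @_;

    open( my $fh, "<", $file ) or die "Failed to open $file: $!";
    binmode($fh);
    return Digest::MD5->new->addfile($fh)->hexdigest;
}

#  Compare against a previously recorded value.
my $recorded = "d41d8cd98f00b204e9800998ecf8427e";    # illustrative.
print "modified!\n"
  unless ( md5_of("/bin/ls") eq $recorded );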

(ObRandom: Clearly if you mass-modify all your binaries the system md5sums database will be trashed.)

Perl UTF

I've received a bug report against chronicle, my blog compiler.

It seems that some versions of perl fail to run this:

    #
    #  Run the command, reading stdout.
    #
    open( FILTER, "$cmd|;utf8" ) or
      die "Failed to run filter: $!";

Removing the ;utf8 layer allows things to work, but will trash any UTF-8 characters in the output - so that's a nasty solution.
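
One candidate fix would be the three-argument form of open, applying the layer explicitly via binmode afterwards - though I've not yet tested that against the failing versions of perl:

    #
    #  Possible alternative: three-arg open, explicit binmode.
    #
    open( my $filter, "-|", $cmd ) or
      die "Failed to run filter: $!";
    binmode( $filter, ":utf8" );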

I'm not sure what the sane solution is here, so I'm going to sit on it for a few days and continue to write test scripts.

ObSubject: 300

| 6 comments