

Entries posted in July 2010

New backported packages!

10 July 2010 21:50

Since I'm using real titles I guess I should make a real post, in which real things are mentioned. Unfortunately most of my time recently has been spent offline, doing things in and around Edinburgh.

However I have done a few things which are possibly worthy of mention. My Lenny repository has been updated a little:

The Gimp

There's a slightly newer version of The Gimp available now, corresponding to a recent upload to unstable.

gtk-gnutella

Once again I was forced to update the backported gtk-gnutella package, as my previous one was too old to connect to the network.

itag

Finally I added a Lenny package for the itag software, which is now essentially complete.

Of those things I had the most fun with the itag software. Partly because it allows me to hoard my images in a way that I appreciate, but also because it made me go over some older images and be pleasantly surprised.

My personal archive, ~/Images, is now just over 80GB, and goes back about ten years. (Of course the older images were taken with random point-and-shoot digital cameras and each image is only a few hundred KB in size. The newer images, saved at full resolution, may be 5MB each.)

Otherwise I've been slowly deploying OpenLDAP in anger, which has been educational. I've got a minor problem to solve - (posix)group definitions don't seem to be working reliably - but otherwise I've got Apache authenticating against groups, SSH logins working, and the little brother database using the LDAP server as an address book. (Mail clients? mutt is the one true mail client. notmuchmail.org will be interesting when further developed, but everything else I'm going to ignore with my stubborn Yorkshire nature ;)
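For the curious, the Apache side of that setup is small. Assuming mod_authnz_ldap is loaded, restricting a location to an LDAP group looks roughly like this - the hostname and DNs are hypothetical placeholders, not my real tree:

```apache
<Location /private>
    AuthType Basic
    AuthName "LDAP restricted"
    AuthBasicProvider ldap
    # Look users up by uid under ou=People (hypothetical DN)
    AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid"
    # Require membership of a group entry (hypothetical DN)
    Require ldap-group cn=admins,ou=Groups,dc=example,dc=com
</Location>
```

Incidentally, posixGroup entries store memberUid values (plain uids, not DNs), so group checks against them generally also need "AuthLDAPGroupAttribute memberUid" and "AuthLDAPGroupAttributeIsDN off" - which may or may not be related to the flakiness I'm seeing.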

ObQuote: "Oh no no no, dead broad OFF THE TABLE!", from Shrek.



My Linksys router now runs Linux and almost provides PXE.

22 July 2010 21:50

I've been interested in running Linux upon my router for a long time, but I never had a really compelling reason to do so. The potential for brickage was always too high to make me want to experiment for the sake of it.

However last night I installed Gargoyle upon my Linksys WRT54GL. Although I had no single compelling reason to do so, there were a few things on my mind which made me risk it:


Logging

I thought it would be nice to log things to my desktop machine.

QoS

I often run rsync to mirror my photographs, videos, and files, to off-site locations. These are then replicated via chironfs. Being able to use QoS to prioritise SSH traffic, which is the transport I use for rsync, means I don't suffer from laggy connections.

Graphing & Statistics

Having statistics and traffic information is interesting.

Since I've only just installed it I've not had too much opportunity to experiment with it - and my initial forays were not so productive. For example "opkg install tcpdump" failed as the root filesystem wasn't big enough.

However one thing which did work was updating the router to function as a PXE server. I installed the TFTP server:

opkg install tftpd-hpa

Then I added this to /etc/dnsmasq.conf:
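(The exact lines aren't preserved here, but with tftpd-hpa serving the files the dnsmasq side is typically just the boot-file option, something like:

```
# hand PXE clients the standard pxelinux loader
dhcp-boot=pxelinux.0
```

)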


Only after I'd done this did I realise two things:

  • I don't have the space on the router to host the pxelinux.0 file, and the associated Debian netboot installer.
  • Chances are I could just use the built-in TFTP support of dnsmasq. ("enable-tftp" + "tftp-root=/tmp/tftproot".)

Tomorrow, (after visiting the dentist. Uggh) I will experiment with this further. Right now it looks like I have two options:

  • NFS mount the TFTP root, but keep both DHCP+TFTPD running upon the router.
  • Configure the router to load the files from another machine, by updating dnsmasq.conf to read: dhcp-boot=pxelinux.0,random.host.name,192.168.1.xx.

Either way I have to store the files upon another host, due to the constrained space on the router's filesystem. So the question becomes "Which service should I run externally: TFTPD or NFS?".

Running TFTPD, upon my desktop, seems smaller, less of a security risk, and neater. Running TFTPD also avoids issues if I reboot both the router and my desktop at the same time as a stalling NFS mount could prevent a timely router-boot.

ObQuote: Looking for a secret door. Places like this always have a secret door.

- St Trinian's 2: The Legend of Fritton's Gold



I'm a CPAN author.

23 July 2010 21:50

As of this morning I'm a published author on CPAN!

Thus far I have only a single module to my name, but that will most likely change in the future:


A module for storing (CGI) session data within a Redis database.

A while back I set up a dynamic website which was 100% Redis-backed, using my Redis backports for Lenny, and realised I needed somewhere to store the session data too. Hence this module.

I'll create a .deb package of the module, and stick it alongside the redis server.

ObQuote: I like to keep this handy... for close encounters.

- Aliens




My router serves Debian installer via gPXE

25 July 2010 21:50

My Linksys router now serves the LAN with netboot images, allowing the simple installation of Debian GNU/Linux.

I updated /etc/dnsmasq.conf on the router itself to read:

# gPXE sends a 175 option.
# serve the "undionly.kpxe" file by default.
# which will then pull the config via HTTP
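The option lines themselves didn't survive the copy above; a dnsmasq stanza matching those comments would look something like this (the HTTP hostname is a hypothetical placeholder):

```
# gPXE sends a 175 option; tag such clients "gpxe"
dhcp-match=gpxe,175
# plain PXE clients get undionly.kpxe (which chainloads gPXE)
dhcp-boot=net:#gpxe,undionly.kpxe
# gPXE clients are pointed at a config fetched via HTTP
dhcp-boot=net:gpxe,http://server.example.com/gpxe.cfg
```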

Then I placed undionly.kpxe in /tftproot on the router. This combination of files causes:

  • The machine to netboot via undionly.kpxe (think of this like pxelinux.0.)
  • The file gpxe.cfg to be fetched via HTTP
  • Which in turn loads a simple menu.conf file, again via HTTP
  • At this point the user can select the flavour of Debian installer to run.

I have only one problem - it seems that adding the expected entry to boot locally fails:


LABEL Local.Disk
   LOCALBOOT 0

Update: After a fresh sleep and more trial and error I discovered this works:

LABEL Local.Boot
   KERNEL local/chain.c32

It gives a temporary warning about invalid sector number - but actually does boot locally by default, on both my EEEPC and my desktop machine.

If you want to snarf my netboot environment you are welcome to mirror it:

Why did I do this in the first place? I wanted to install Squeeze upon my EEEPC (now done.)

ObQuote: Nice Cross. How'd you get the blood off?

- Dead Like Me



sysadmin.im considered harmful

25 July 2010 21:50

Not for the first time I find my blog content copied and hosted elsewhere. This time via http://sysadmin.im/.

Mostly I care little if people rehost my content. But when people claim to have written it (e.g. "Posted by Admin") I get annoyed.

No explicit contact details are posted, probably to avoid complaints.

Update: Fixed URL. Stupid do.tted.na.mes.



Sometimes you just wonder: would other people like this?

27 July 2010 21:50

Sometimes I write things that are for myself, and later decide to release on the off-chance other people might be interested.

I've hated procmail for a long time, but it is extremely flexible, and for the longest time I figured since I'd got things working the way I wanted there was little point changing.

When it comes to procmail there are few alternatives:

  • Exim filters
  • Email::Filter
  • maildrop

Unfortunately both Exim and Email::Filter suffer from a lack of "pipe" support. To be more specific Exim filters and Email::Filter allow you to pipe an incoming message to an external program - but they regard that as the end of the delivery process.

So, for example, you cannot receive a message (on STDIN), pipe it through crm114, and then process the updated message (i.e. the output of crm114).

Maildrop does allow pipes, but suffers from other problems which makes me "not like it".

My own approach is to have a simple mail-sieve command which is configured thusly:

set maildir=/home/steve/Maildir
set logfile=/home/.trash.d/mail-sieve.log

#  Null-senders
Return-Path: /<>/        save .Automated.bounces/

#  Spam filter
filter /usr/bin/crm -u /home/steve/.crm /usr/share/crm114/mailreaver.crm

#  Spam?
X-CRM114-Status: /SPAM/   save .CRM.Spam/
X-CRM114-Status: /Unsure/ save .CRM.Unsure/

#  People / Lists
From: /foo@example.com/  save .people.foo/
From: /bar@example.com/  save .people.bar/

#  Domains
To: /steve.org.uk$/               save .steve.org.uk/
To: /debian-administration.org$/  save .debian-administration.org.personal/

#  All done.
save .inbox.unfiled/

On the one hand this is simple, readable, and complete enough for myself. On the other hand if I were going to make it releasable I think I'd probably want to add both conditionals and the ability to match upon multiple header values.

Getting there would probably involve something like this on the ~/.mail-filter side :

if ( ( From: /foo@example.com ) ||
     ( From: /bar@example.com ) )
   save .people.example.com/
# ps. remind me how much I hate parsers and lexers?

That starts to look very much like Exim's filter language, at which point I think "why should I bother?". Pragmatically the simplest solution would be to add a "Filter" primitive to Email::Filter - and pretend I understood the nasty "Exit" settings.
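For what it's worth, the pipe-then-continue primitive itself is easy to express. Here is a minimal Python sketch - not my mail-sieve tool, just an illustration, with folder names that echo the config above:

```python
import subprocess

def filter_message(message: bytes, command: list) -> bytes:
    """Pipe a message through an external command and carry on
    with the command's output -- the pipe-then-continue step that
    Email::Filter treats as the end of delivery."""
    result = subprocess.run(command, input=message,
                            stdout=subprocess.PIPE, check=True)
    return result.stdout

def choose_folder(message: bytes, maildir: str) -> str:
    # Match against the headers of the *updated* message,
    # mirroring the X-CRM114-Status rules above.
    if b"X-CRM114-Status: SPAM" in message:
        return maildir + "/.CRM.Spam/"
    if b"X-CRM114-Status: Unsure" in message:
        return maildir + "/.CRM.Unsure/"
    return maildir + "/.inbox.unfiled/"
```

In real use the command passed to filter_message would be the crm114 invocation from the config; the point is simply that delivery continues with the filtered output rather than ending at the pipe.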

ObQuote: Andre, we don't use profanity or double negatives here at True Directions. - "But I'm a Cheerleader".



I've accidentally written a replication-friendly filesystem

29 July 2010 21:50

This evening I was mulling over a recurring desire for a simple, scalable, and robust replication filesystem. These days there are several out there, including Gluster.

For the past year I've personally been using chironfs for my replication needs - I have /shared mounted upon a number of machines and a write to it on any will be almost immediately reflected in the others.

This evening, when mulling over a performance problem with Gluster I was suddenly struck by the idea "Hey, Redis is fast. Right? How hard could it be?".

Although Redis is another one of those new-fangled key/value stores it has several other useful primitives, such as "SETS" and "LISTS". Imagine a filesystem which looks like this:

  /srv
  /tmp
  /var/spool/tmp

Couldn't we store those entries as members of a set? So we'd have:

  SET ENTRIES:/              -> srv, tmp, var
  SET ENTRIES:/var/spool     -> tmp
  SET ENTRIES:/var/spool/tmp -> (nil)

If you do that, "readdir(path)" becomes merely "SMEMBERS ENTRIES:$path" ("SMEMBERS foo" being "members of the set named foo"). At this point you can add and remove directories with ease.

The next step, given an entry in a directory "/tmp", called "bob", is working out the most important things:

  • Is /tmp/bob a directory?
    • Read the key DIRECTORIES:/tmp/bob - if that contains a value it is.
  • What is the owner of /tmp/bob?
    • Read the key FILES:/tmp/bob:UID.
  • If this is a file what is the size? What are the contents?
    • Read the key FILES:/tmp/bob:size for the size.
    • Read the key FILES:/tmp/bob:data for the contents.

So with a little creative thought you end up with a filesystem which is entirely stored in Redis. At this point you're thinking "Oooh shiny. Fast and shiny". But then you think "Redis has built in replication support..."

Not bad.
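To make the key layout concrete, here is a toy sketch of those operations in Python. The small class merely stands in for the Redis commands involved (SADD/SMEMBERS for the ENTRIES: sets, SET/GET for the per-file keys) - it's a hypothetical illustration of the scheme, not the actual FUSE code:

```python
# In-memory stand-in for the Redis commands used by the scheme.
# A real implementation would issue these commands to a Redis server.
class FakeRedis:
    def __init__(self):
        self.sets = {}
        self.strings = {}

    def sadd(self, key, *members):
        self.sets.setdefault(key, set()).update(members)

    def smembers(self, key):
        return self.sets.get(key, set())

    def set(self, key, value):
        self.strings[key] = value

    def get(self, key):
        return self.strings.get(key)


def child(parent, name):
    return parent.rstrip("/") + "/" + name

def mkdir(r, parent, name):
    r.sadd("ENTRIES:" + parent, name)                  # directory listing
    r.set("DIRECTORIES:" + child(parent, name), "1")   # presence marks a dir

def readdir(r, path):
    # readdir(path) is just SMEMBERS ENTRIES:<path>
    return sorted(r.smembers("ENTRIES:" + path))

def is_dir(r, path):
    return r.get("DIRECTORIES:" + path) is not None

def write_file(r, parent, name, data, uid=1000):
    r.sadd("ENTRIES:" + parent, name)
    path = child(parent, name)
    r.set("FILES:" + path + ":data", data)
    r.set("FILES:" + path + ":size", len(data))
    r.set("FILES:" + path + ":UID", uid)
```

With that in place a directory listing is one SMEMBERS call and each stat-style question is a single GET, which is why the whole thing maps onto Redis so neatly.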

My code is a little rough and ready, using libfuse & the hiredis C API for Redis. If there's interest I'll share it somewhere.

It should be noted that currently there are two limitations with Redis:

  • All data must fit inside RAM.
  • Master<->Slave replication is trivial, and is the only kind of replication you get.

In real terms the second limitation is the killer. You could connect to the Redis server on HostA from many machines - so you get a genuinely shared filesystem, if not a fully replicated one. Given that the binary protocol is simple this might actually be practical in the real world. My testing so far seems "fine", but I'll need to stress it some more to be sure.

Alternatively you could bind the filesystem to the redis server running upon localhost on multiple machines - one redis server would be the master, and the rest would be slaves. That gives you a filesystem which is read-only on all but one host, but if that master host is updated the slaves see it "immediately". (Does that setup even have a name? I'm thinking of master-write, slave-read, and that gets cumbersome.)

ObQuote: Please, please, please drive faster! -- Wanted