
Entries posted in October 2010

The pain of a new IP address

13 October 2010 21:50

Tonight I was having some connectivity issues, so after much diagnostic time and pain I decided to reboot my router. The moment my home router came back up my (external) IP address changed, and suddenly I found I could no longer log in to my main site.

Happily, however, I have serial console access, and I updated things such that my new IP address was included in the hosts.allow file. [*]

The next step was to push that change round my other boxes, and happily I have my own tool, slaughter, which allows me to make such global changes in a client-pulled fashion. Sixty minutes later cron did its magic and I was back.

This reminds me that I've let the slaughter tool stagnate, mostly because I only use it to cover my three remote boxes and my desktop, and although I received one bug report (+fix!) I never heard of anybody else using it.

I continue to use and like CFEngine at work. Puppet & Chef have been well argued against elsewhere, and I've yet to investigate Bcfg2 + FAI.

Mostly I'm happy with slaughter. My policies are simple, readable, and intuitive. Know Perl? Learn the "CopyFile" primitive and you're done. For example:
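
Something like this - though I should stress the parameter names here, and the RunCommand primitive, are illustrative guesses rather than the documented API:

# Keep my hosts.allow.trusted file current on every box.
if ( CopyFile( Source => "etc/hosts.allow.trusted",
               Dest   => "/etc/hosts.allow.trusted",
               Owner  => "root",
               Group  => "root",
               Mode   => "0644" ) )
{
    # Only runs when the file content actually changed.
    RunCommand( Cmd => "logger slaughter updated hosts.allow.trusted" );
}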

By contrast the notion of state machines, functional operations, and similar seem over-engineered in other tools. Perhaps that's my bug, perhaps that's just the way things are - but the rants linked to above make sense to me and I find myself agreeing 100%.

Anyway; slaughter? What I want to do is rework it such that all policies are served via rsync and not via HTTP. Other changes, such as the addition of new primitives, don't actually seem necessary. But serving content via rsync just seems like the right way to go. (The main benefit is that recursive copies of files become trivial.)
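
The client pull would then be little more than a wrapper around rsync. A minimal sketch, with a purely hypothetical server URL and module name:

#!/usr/bin/perl
# Hypothetical sketch: mirror the policy tree from the server,
# ready for the policies to be executed locally.
use strict;
use warnings;

my $server = "rsync://policies.example.org/slaughter/";    # assumed module name
my $cache  = "/var/cache/slaughter/policies/";

# --delete keeps the cache an exact mirror of the server;
# recursive copies of files come for free.
system( "rsync", "-az", "--delete", $server, $cache ) == 0
    or die "rsync failed: $?";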

I'd also add the ability to mandate GPG signatures on policies, but that's possible even now. The only step backwards I see is that currently I can serve content over SSL; that should be fixable too, even if only via stunnel.


*

My /etc/hosts.allow file contains this:

ALL: 127.0.0.1
ALL: /etc/hosts.allow.trusted
ALL: /etc/hosts.allow.trusted.apache

Then hosts.allow.trusted contains:

# www.steve.org.uk
80.68.85.46

# www.debian-administration.org
80.68.80.176

# my home.
82.41.x.x

I've never seen anybody describe something similar, though to be fair it is documented. To me it just seems clean to keep the permitted IPs in a single place.

To conclude, hosts.allow.trusted.apache is owned by root:www-data, and can be updated via a simple CGI script - which allows me to add a single IP address on the fly for the next 60 minutes. Neat.
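
A minimal sketch of such a CGI script, assuming the 60-minute expiry is handled by a companion cron job which prunes stale entries, might look like this:

#!/usr/bin/perl
# Hypothetical sketch: append the caller's IP address, timestamped,
# to the Apache-writable trust file.  A cron job (not shown) would
# remove entries whose timestamp is over 60 minutes old.
use strict;
use warnings;

my $file = "/etc/hosts.allow.trusted.apache";
my $ip   = $ENV{REMOTE_ADDR} || "";

print "Content-type: text/plain\n\n";

# Accept only a plausible dotted-quad; reject anything else.
if ( $ip =~ /^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$/ )
{
    open( my $fh, ">>", $file ) or die "Failed to open $file: $!";
    print $fh "# added " . time() . "\n" . $1 . "\n";
    close($fh);
    print "Added $1\n";
}
else
{
    print "Rejected.\n";
}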

ObQuote: Tony is a little boy that lives in my mouth. - The Shining


Recently I've been working with flash

19 October 2010 21:50

Recently I've been producing simple Flash animations.

Mostly these are simple "show an image, slide it around a bit, show another ..." affairs. But I still feel vaguely unclean and non-free.

I started off using the SWF Perl binding, but soon realised that wasn't much fun. So I wrote a mini interpreter such that I can script creation:

#!/usr/bin/flash-scripter

# create the movie
create 640 480

# background == black
clear 0, 0, 0

# load image at 0,0
load 0, 0, foo.png

# move it about a bit
move 0, -1
move 0, -1
move 0, -1

# movie-time is over now.
stop

# finally save the movie
save foo.swf

# all done
exit

I see I'm not alone in doing such a thing, as swftools (not available for Debian) includes swfc, "A tool for creating SWF files from simple script files". Sadly swftools fails to build for me on Squeeze, so I couldn't try it out.

My little tool is called SWF::Scripter, and uses plugins to implement each "command", so I should probably upload it somewhere public. On the other hand it was a quick hack to produce a mini-story from a bunch of images with no complex transitions, so I'm not sure it is worth the effort.
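
To give a flavour of that plugin approach, here's a hypothetical sketch; the package layout and method names are guesses at how such a tool might hang together, not the real code:

package SWF::Scripter::Plugin::move;

# Hypothetical sketch of a command plugin: the driver parses a line
# such as "move 0, -1" and hands the arguments to the matching plugin.
use strict;
use warnings;

sub new { my $class = shift; return bless {}, $class; }

sub execute
{
    my ( $self, $state, $dx, $dy ) = @_;

    # $state tracks the current image position on the stage.
    $state->{x} += $dx;
    $state->{y} += $dy;
    return 1;
}

1;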

ObQuote: "I honestly think I'd give up smoking if he asked me." - Breakfast at Tiffany's


Delivering to a Maildir folder, but marking as read

25 October 2010 21:50

Email. We get a lot of it. We filter out the spam. Then, if we're well-organized, we file it away into folders as it arrives. (To be fair some people use priority settings such that all mail stays in their INBOX until they're "done" with it. I've never had the patience for that kind of behaviour.)

One problem I often encounter is wanting email to be delivered, archived, and stored, without my actually having to read it. Yet when I see a folder in my mail client which has unread mail in it I cannot unsee it.

In my case I deliver mail to folders as it arrives via either Exim's filter language or procmail. Procmail blows goats on the whole, but it is common and available everywhere - I just don't trust my own DSL enough to rely upon it (which is a dangerous sign).

So, how do you deliver a new mail to a Maildir folder and mark it read at the same time? Well you can be naïve and do what I did, which is to invoke formail to add "Status: RO" to the header. Unfortunately that's insufficient to mark a mail as read when viewed in mutt.

When an email is stored in a Maildir folder its status is encoded in part of its filename - which is why you'll have files like:

new/1288039894.28406_3.steve.org.uk
cur/1283711971.11157_3.skx.xen-hosting.net:2,S

The latter file has been Seen. So to mark a message as not-new you need to do two things:

  • Save it to ./cur/ not ./new/.
  • Append the appropriate flags to the filename (generally :2,S).
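
The read-to-maildir script isn't reproduced here, but a minimal sketch of the idea in Perl - assuming the message arrives on STDIN and the Maildir path is the sole argument - would be:

#!/usr/bin/perl
# Hypothetical sketch: read a complete message on STDIN and save it
# straight to the cur/ directory of the given Maildir, flagged Seen.
use strict;
use warnings;
use Sys::Hostname;

my $maildir = shift or die "Usage: $0 /path/to/Maildir/\n";

# A unique name in the usual Maildir style: time.pid.hostname
my $name = time() . "." . $$ . "." . hostname();

# Write to tmp/ first so readers never see a partial message,
# then rename into cur/ with the Seen flag appended.
my $tmp = "$maildir/tmp/$name";
my $cur = "$maildir/cur/$name:2,S";

open( my $out, ">", $tmp ) or die "Failed to open $tmp: $!";
print $out $_ while (<STDIN>);
close($out);

rename( $tmp, $cur ) or die "Failed to rename $tmp: $!";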

I've seen some horrific shell + procmail code to do the job, but the simpler version is:

:0
*^(To:|From:).*root@
*^X-added-header:.*debian-administration.org
| ~/bin/read-to-maildir .machines.debian-administration.org/

Similarly you can use Exim's filter language to do the same job:

# Exim filter
if $h_to: contains "hostmaster@" then
    pipe "/home/skx/bin/read-to-maildir /home/steve/Maildir/.hostmaster/"
    finish
endif

Cute. Obvious too? Maybe not to me.

ObQuote: Why did you murder someone, Raymond? - In Bruges
