
Entries posted in October 2009

Poppa's got a brand new bang.

6 October 2009 21:50

Recently I posted a brief tool for managing "dotfile collections". This tool was the rationalisation of a couple of ad-hoc scripts I already used, and was a quick hack written in nasty bash.

I've updated my tool so that it is coded in slightly less nasty Perl. You can find the dotfile-manager repository online now.

This tool works well with my dotfile repository, and with the matching, but non-public, dotfiles-private repository.
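
Conceptually the tool does little more than walk a repository checkout and symlink each file into place beneath $HOME. Something like this minimal sketch - an illustration of the idea, not the actual dotfile-manager code, and with a hypothetical repository path:

#!/usr/bin/perl
# Minimal sketch of the dotfile-linking idea: symlink every file in a
# repository checkout into $HOME, prefixed with a dot.  Not the actual
# dotfile-manager code; the repository path is hypothetical.
use strict;
use warnings;
use File::Basename qw(basename);

my $repo = "$ENV{HOME}/repos/dotfiles";    # hypothetical checkout location

for my $file ( glob("$repo/*") ) {
    next unless -f $file;
    my $target = "$ENV{HOME}/." . basename($file);
    next if -e $target;                    # never clobber an existing file
    symlink( $file, $target )
      or warn "Failed to link $file -> $target: $!\n";
}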

I suspect that this post might flood a couple of feed aggregators, because I've recently updated my chronicle blog compiler with a new release. This release has updated all the supplied themes/templates such that they validate strictly, and as part of that I had to edit some of my prior blog entries to remove bogus HTML markup. (Usually simple things such as failing to escape & characters correctly, or using "[p][/P]" due to sloppy shift-driving.)

I should probably update the way I post entries, and use Markdown or Textile instead of manually writing HTML inside Emacs, but the habit has been with me for too long. Even back when I used WordPress I wrote my entries in HTML...

Finally, one other change in the most recent chronicle release is that the "mail-scanning.com theme" has been removed, as the service itself is no longer available. But all is not lost.

ObFilm: Blade II

| No comments

 

There's no such thing as a wrong war

13 October 2009 21:50

Once upon a time I wrote a blog compiler, a simple tool that would read in a bunch of text files and output a blog. This blog would contain little hierarchies for tags, historical archives, etc. It would also have a number of RSS feeds.

Every now and again somebody will compare it to ikiwiki and I'll ignore that comparison entirely, because the two tools do different things in completely different fashions.

But I was interested to see Joey talk about performance tweaks recently, as I have a blog of about 900 pages which takes just over 2 minutes to build from start to finish. (Not this one!)

I've been pondering performance for a while as I know my current approach is not suited to high speed. Currently the compiler reads in every entry and builds a giant data structure in memory which is walked in different fashions to generate and output pages.

The speed issue comes about because storing the data structure entirely in memory is insane, and because sometimes a single entry will be read from disk multiple times.

I've made some changes over the past few evenings such that a single blog entry will be read from disk no more than once (and perhaps zero times if Memcached is in use :) but that doesn't solve the problem of memory usage.
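
The read-at-most-once behaviour is essentially memoisation. Here is a sketch of the idea, assuming the Cache::Memcached module and keying on the file's path and mtime so that edited entries aren't served stale - the read_entry helper is mine, not chronicle's:

use strict;
use warnings;
use Cache::Memcached;

my %seen;    # in-process cache: each file is read at most once per run
my $memd = Cache::Memcached->new( { servers => ["127.0.0.1:11211"] } );

# Hypothetical helper, not chronicle's actual API.
sub read_entry {
    my ($file) = @_;
    return $seen{$file} if exists $seen{$file};

    # Key on path + mtime so edited entries are re-read.
    my $key  = $file . ":" . ( stat($file) )[9];
    my $text = $memd->get($key);
    unless ( defined $text ) {
        open( my $fh, "<", $file ) or die "Cannot open $file: $!";
        local $/;    # slurp mode for this scope
        $text = <$fh>;
        close($fh);
        $memd->set( $key, $text );
    }
    return $seen{$file} = $text;
}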

So last night I made a quick hack - using my introduction to SQLite as inspiration I wrote a minimal reimplementation of chronicle which does things differently (a rough sketch of the approach follows the list):

  • Creates a temporary SQLite database with tables: posts, tags, comments.
  • Reads every blog entry and inserts it into the database.
  • Uses the database to output pages.
  • Deletes the database.
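
In the spirit of the rewrite, this sketch shows how those four steps hang together using DBI and DBD::SQLite - the schema, the data/*.txt file layout, and the output handling are all illustrative rather than chronicle's own:

#!/usr/bin/perl
# Sketch of the SQLite-backed rebuild: load every entry into a temporary
# database, generate output from queries, then delete the database.
use strict;
use warnings;
use DBI;
use File::Basename qw(basename);
use File::Temp qw(tmpnam);

my $db  = tmpnam();
my $dbh = DBI->connect( "dbi:SQLite:dbname=$db", "", "", { RaiseError => 1 } );

# tags and comments are created to match the list above; only posts is
# exercised in this sketch.
$dbh->do("CREATE TABLE posts    ( id INTEGER PRIMARY KEY, title TEXT, body TEXT )");
$dbh->do("CREATE TABLE tags     ( post_id INTEGER, tag TEXT )");
$dbh->do("CREATE TABLE comments ( post_id INTEGER, author TEXT, body TEXT )");

# Read each entry exactly once, inserting it as we go.
my $ins = $dbh->prepare("INSERT INTO posts (title, body) VALUES (?, ?)");
for my $file ( glob("data/*.txt") ) {
    open( my $fh, "<", $file ) or die "Cannot open $file: $!";
    my $body = do { local $/; <$fh> };
    close($fh);
    $ins->execute( basename( $file, ".txt" ), $body );
}

# Generate pages by querying, not by walking a giant in-memory structure.
my $posts = $dbh->selectall_arrayref("SELECT title, body FROM posts ORDER BY id");
for my $post (@$posts) {
    print "<h2>$post->[0]</h2>\n";    # real templating elided
}

# The database was only ever a scratch area.
$dbh->disconnect();
unlink($db);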

This is a significantly faster approach than the previous one - with a "make steve" job taking only 18 seconds, down from just over 2 minutes 5 seconds.

("make steve" uses rsync to pull in comments on entries, rebuilds the blog, then uses rsync to push the generated output into its live location.)

ObFilm: If...

| 6 comments

 

Big change. Sometimes good. Sometimes bad.

20 October 2009 21:50

Over the weekend I bit the bullet and purchased a new camera, a Canon EOS 1000D (many-page review).

I've taken a few hundred shots with it so far, trying to get a feel for what it can do, and what I can do now that I couldn't before </Lady Teldra>.

As my first DSLR it's a pretty significant upgrade, and I'm enjoying it a lot - especially since I've managed to persuade a few local people to pose for me.

Now is the time to wonder how to store, share, and organise the pictures I've taken and will be taking in the future. Currently all my pictures are beneath ~/Images, replicated across a number of machines for redundancy. As a sample I have:

skx@gold:~$ ls /home/skx/Images/
Computer  Flat  Misc  Parties  People  Pets & Animals  Travel

Beneath these top-level directories I have more directories for specific items, such as ~/Images/Travel/2009/York, or ~/Images/People/kelly.

I think I should probably be looking at using some image-manager application, though, to allow me to tag, date, and export images more easily.

Right now I'm not really sharing anything except a few sample shots.

Those are quite nice shots, but I suspect sooner or later I will have pictures I do wish to share properly. So I need to come up with a URL scheme, or a library tool which will export specific shots and keep the rest of my archive private by default.
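
One simple private-by-default approach would be a small export script driven by an explicit allow-list, so nothing becomes public unless deliberately listed. A sketch, with hypothetical paths and list file:

#!/usr/bin/perl
# Sketch of a private-by-default export: only images named in an explicit
# allow-list are copied to the public directory.  Paths are hypothetical.
use strict;
use warnings;
use File::Copy qw(copy);
use File::Basename qw(basename);

my $archive = "$ENV{HOME}/Images";
my $public  = "$ENV{HOME}/public_html/photos";
my $list    = "$ENV{HOME}/Images/.shared";    # one relative path per line

open( my $fh, "<", $list ) or die "Cannot open $list: $!";
while ( my $path = <$fh> ) {
    chomp $path;
    next unless length $path;
    copy( "$archive/$path", "$public/" . basename($path) )
      or warn "Failed to export $path: $!\n";
}
close($fh);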

Any suggestions are welcome...

ObFilm: The Dark Crystal

| 10 comments

 

Look are you gonna step outside or do I have to drag you?

25 October 2009 21:50

Over the past couple of months the machine which hosts the Debian Administration website has been struggling with two distinct problems:

The dreaded scheduler bug/issue

The machine would frequently hang with messages of the form:

Task xxx blocked for more than 120 seconds

This would usually require the application of raised elephants (the Alt+SysRq R-E-I-S-U-B sequence) to recover from.

OOM issues

The system would exhaust the generous 2GB of memory it possessed, and start killing random tasks until the memory usage fell - at which point the server itself stopped functioning in a useful manner.

Hopefully these problems are now over.

The combination of two recent changes should resolve the memory issues, and I've installed a home-made 2.6.31.4 kernel which appears to have corrected the task-blocking scheduler issue.

ObTitle: Bridget Jones: The Edge of Reason

| 2 comments