
 

Entries posted in August 2009

I don't fuck everything that's dead.

6 August 2009 21:50

This has been a hellishly busy & stressful week for various reasons, but today I made my situation a whole lot worse:

root@steve:~#
root@steve:~# cd /etc
root@steve:/etc# find . -name '*' -delete

Ooops.

I got the system back to minimum functionality by re-creating /etc/hostname, /etc/resolv.conf, and restoring backups of passwd, group, and shadow.

Unfortunately, attempting to restore things further soon hit a roadblock, so I copied /etc/* from a similar machine and patched up hostnames, etc. Even then, things like GDM were missing their initscripts, so the system failed to give me a good workable base.

What is the preferred way to recover from missing files anyway?

Try this:

 rm /etc/init.d/ssh

The naive attempt at recovery is this:

 apt-get install openssh-server --reinstall

But that doesn't restore the file, so I'd be curious to know how I should restore it.
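One approach that presumably works here, assuming the problem is dpkg treating the deleted conffile as a deliberate removal, is to force missing conffiles to be reinstalled:

 # --force-confmiss asks dpkg to put back conffiles which have gone
 # missing, rather than honouring their absence as an intentional choice.
 apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" openssh-server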

The emacs alternative, and the symlinks from /etc/alternatives in general, were missing, and on the whole it was a broken mess. The system would start, but GDM failed to give me a login due to dbus and other random errors.

In the end I archived /home/steve to another box and planned a reimage. (I've currently installed the base system via a PXE-boot, and am now installing the X, Emacs, and Firefox stuff.)

The good news is I've lost no data. The great news is that the installer recognised my LVM setup, and allowed me to re-mount /home without losing the data, or needing to touch the temporary-off-site backup.

In other news tomorrow we have a magical time:

  • Time 12:34:56 07/08/09

I will try to ensure I'm drinking alcohol that very second.

ObFilm: Kissed

| 3 comments

 

Oh, this should be stunning.

8 August 2009 21:50

Recently I've been writing some documentation using the docbook toolset.

"Helpfully" the docbook tools produce a nice table of contents for your documentation. For example it will produce an index.html file containing a list of chapters, list of figures, list of tables, and finally a list of examples.

For my specific use I only wanted a table of contents listing chapters, all the other lists were just noise.

Unfortunately I've produced my documentation using the naive docbook2html tool, and all the details I can find online about customising the table of contents to remove specific items refer to using XSLT and other lower-level tools.

So I thought I'd cheat. Looking at the generated index.html file I noticed that the items I wish to remove all have class attributes of TOC.

Is there a tool to parse HTML removing items with particular ID attributes? Or removing items having a particular CLASS?

I couldn't find one, so I knocked one up using HTML::TreeBuilder::XPath; perhaps it will be useful to others:

html-tool --file=index.html --cut-class=foo --indent

The file index.html will be read, parsed, and all items with "class='foo'" will be removed. The output will be indented in a pretty fashion and written to STDOUT.
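For my particular docbook problem the invocation would presumably be nothing more than this, using the TOC class mentioned above:

html-tool --file=index.html --cut-class=TOC --indent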

This example does a similar thing:

html-tool --url=http://www.steve.org.uk/ --output=x.html \
  --cut-id=top --cut-class=mbox --indent

I dabbled with allowing you to just dump HTML sections, so you could run:

html-tool --show-class=foo --file=index.html

But that didn't seem as obviously useful, so I dyked it out. Other similar operations could make it more generally useful though - right now it's more of an html-cut than an html-tool!

ObFilm: The Breakfast Club

| 7 comments

 

We have such sights to show you!

11 August 2009 21:50

Is this web 2.0?

ObFilm: Hellraiser

| No comments

 

You know that name?

15 August 2009 21:50

I found some time earlier today to update my database schema and javascript-fu, and to make the poll host allow an almost arbitrary number of options:

 Is it good to have arbitrary options?

Sneakily it also has a second hierarchy for non-public polls, but I'll not advertise that.

Anyway, it's either vaguely useful, an amusing diversion, or utterly pointless. I'll not ask which!

ObFilm: HellBoy #2

| No comments

 

Thank you for coming back to me.

20 August 2009 21:50

I made a new release of the chronicle blog compiler today, and learned to hate the freshmeat.net website a little more.

The only real change is that each compiled blog will now receive a generated sitemap.xml file containing links to every output page. This will be useful for those folks who use real titles for their posts.

Nothing too much to report upon, although I noted with interest Antti-Juhani Kaijanaho's recent forum installation.

I love the idea of having a forum be a mere wrapper around a real transport system, which supports threading natively - but as I said almost a year ago, I'd have done it using mailing lists and/or Maildir folders...

ObFilm: Brief Encounter.

| No comments

 

The plans you refer to will soon be back in our hands.

22 August 2009 21:50

Many of us use rsync to shuffle data around, either to maintain off-site backups, or to perform random tasks (e.g. uploading a static copy of your generated blog).

I use rsync in many ways myself, but the main thing I use it for is to copy backups across a number of hosts. (Either actual backups, or stores of Maildirs, or similar.)

Imagine you back up your MySQL database to a local system, and you keep five days of history in case of accidental error and deletion. Chances are that you'll have something like this:

/var/backups/mysql/0/
/var/backups/mysql/1/
/var/backups/mysql/2/
/var/backups/mysql/3/
/var/backups/mysql/4/

(Here I guess it is obvious that you back up to /mysql/0, after rotating the contents of 0->1, 1->2, 2->3, & 3->4.)

Now consider what happens when that rotation runs and you rsync to an off-site location afterward: you're copying far more data around than you need to, because each directory has different content every day - rsync matches files by path, so a dump that merely moved from 0/ to 1/ looks like brand-new data.

To solve this I moved to storing my backups in directories such as this:

/var/backups/mysql/09-03-2009/
/var/backups/mysql/10-03-2009/
/var/backups/mysql/11-03-2009/
..

This probably simplifies the backup process a little too: just back up to $(date +%d-%m-%Y) after removing any directory older than four days, as sketched below.
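A minimal sketch of that idea, assuming a nightly dump of all databases via mysqldump (the paths and the dump command are illustrative, not my actual script):

 #!/bin/sh
 BASE=/var/backups/mysql
 TODAY=$(date +%d-%m-%Y)

 # Prune dated directories older than four days.
 find "$BASE" -mindepth 1 -maxdepth 1 -type d -mtime +4 -exec rm -rf {} +

 # Dump into today's directory; once written its contents never change.
 mkdir -p "$BASE/$TODAY"
 mysqldump --all-databases > "$BASE/$TODAY/all-databases.sql"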

Now imagine you rsync: the contents of previous days won't change at all, so you'll end up moving significantly less data around.
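The off-site copy itself stays a single rsync invocation; something along these lines, where the host and destination path are made up for the example:

 # Only the newly-created day's directory (plus deletions of pruned days)
 # needs to be transferred.
 rsync -az --delete /var/backups/mysql/ backup@offsite.example.com:/srv/backups/mysql/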

This is a deliberately contrived and simple example, but it also applies to common everyday logfiles such as /var/log/syslog, syslog.1, syslog.2.gz etc.

For example on my systems qpsmtpd.log is huge, and my apache access.log files are also very large.

Perhaps food for thought? One of those things that is obvious when you think about it, but doesn't jump out at you unless you schedule rsync to run very frequently and notice that it doesn't work as well as it "should".

ObFilm: Star Wars. The Family Guy version ;)

| 12 comments

 

Has he tried to speak or communicate in any way?

25 August 2009 21:50

See? Steam power does have its uses!

To avoid this becoming my most content-free post ever I'll close by saying that I updated the html-tool utility as per Sven Mueller's suggestion, so you can now show/dump arbitrary class or ID values from HTML.

Oh, and I added an about page to my blog.

ObFilm: Seven (Not Se7en - that's just dumb.)

| No comments