
 

Entries posted in July 2009

I bet you're a real tiger in disguise.

5 July 2009 21:50

I've not been online much for the past week, for two main reasons:

My cat was injured

Usually my cat lives outdoors for just over half the day, but last week he came back with a bad limp and since then he's been an indoor kitty.

He stopped limping so badly pretty much the next day, but he has managed to scratch/bite a fair amount of fur from his leg by worrying at it.

Still that seems to have stopped and I'm sure he'll be fine. Once the fur has grown back I'll throw him back outside and hope he's more careful in the future!

(No idea whether it was a bad fall, a collision, or a fight with a cat/dog/fox/squirrel that caused it .. there were no obvious injuries, such as bites, scratches, or things embedded in the limb, when he was at the vet's.)

Geometry Wars: Galaxies

I might be a bit slow, but this Nintendo DS game rocks. Hard.

End Communiqué

ObFilm: Faster, Pussycat! Kill! Kill!

| 5 comments

 

Why do you keep torturing yourself?

9 July 2009 21:50

Recently I came to realise that my planning and memory skills weren't adequate for keeping track of what I want to do, and what I need to do.

For a while I've been fooling myself into thinking that "emacs ~/TODO" was a good way to keep track of tasks. It really isn't - especially if you work upon multiple machines throughout the week.

So I figured I needed something "always available", which these days mostly means an online application / website.

Over the years I've looked at many multi-user online "todo-list" applications, and inevitably they all suck. Mostly they suck because they're too rigid, or because they don't fit my particular way of working, living, and doing "stuff".

To my mind:

  • A todo-list should above all make it easy to add tasks.
    • If you cannot easily add tasks then you won't. So you'll never use it.
  • A task might be open or closed, but it will never be 23.55% complete.
  • A task might be "urgent" or not, but it will never be "very urgent", "semi-urgent", "do soon", or "do today".
  • A task might have many steps, but they should either be added as separate tasks, or recorded in the task's notes.
    • I find the notion of making "task A" depend upon "task B" perilous.
  • A task belongs to one person. It cannot be moved, shared, or split.

Some of those things, such as subtasks and completion percentages, I guess are more applicable to project-management software. Those and time-sheet applications really wind me up.

With my mini-constraints in mind I sketched out a couple of prototypes. The one I was expecting to use had a typical three-pane view:

[ Task Menu ]     |  Task1: Buy potatoes
                  |  Task2: Remember to check email
  All Tasks       |  Task3: Order more cake.
  Completed Tasks |------------------------------------
  Urgent Tasks    |
                  |  [Task Details here]
  Tags:           |
   * Work         |  [Urgent | Non-Urgent ]
   * Fun          |
   * Birthdays    |  [Close Task | Re-Open Task ]
   * Parties      |
   * Shopping     |  [Notes ..]
   ...

That turned out to be a pain to implement, and also a little unwieldy. I guess trying to treat a tasklist as a collection of email is a difficult proposition at best - but more on that in my next post.

So after a quick rethink I've now come up with a simple but functional layout, based upon the notions that:

  • Adding tasks must be almost too easy.
  • Most tasks only need a one-line description.
  • Adding tags is good. Because tasks cross boundaries.
  • Adding notes is good.
  • No task should ever be deleted - but chances are we don't actually wish to view tasks older than a month. We can, but they're hidden by default.
  • When a task is closed/completed it cannot be edited.
  • All tasks belong to their owner and are non-public.

So what I've got is a multi-user system which is essentially split into four behaviours: Adding a task, viewing open tasks, viewing closed tasks, and searching for tasks.

Tasks can be tagged and have notes added to them (though notes are never deleted or edited), and over time closed tasks fade from view (though they're never deleted).
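For the curious, the data model those rules imply is tiny. Here's a sketch of how I think of it - the table and column names are illustrative, not lifted from my actual code:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=todo.db", "", "",
                        { RaiseError => 1 } );

# A task is owned, open or closed, urgent or not. Nothing else.
$dbh->do( "CREATE TABLE IF NOT EXISTS tasks (
             id      INTEGER PRIMARY KEY,
             owner   TEXT NOT NULL,
             title   TEXT NOT NULL,
             urgent  INTEGER DEFAULT 0,
             closed  INTEGER DEFAULT 0,
             created INTEGER NOT NULL )" );

# Notes are append-only: the application offers no UPDATE/DELETE path.
$dbh->do( "CREATE TABLE IF NOT EXISTS notes (
             task_id INTEGER NOT NULL,
             body    TEXT NOT NULL,
             added   INTEGER NOT NULL )" );

# Tags are a simple task->name mapping, so tasks may cross boundaries.
$dbh->do( "CREATE TABLE IF NOT EXISTS tags (
             task_id INTEGER NOT NULL,
             name    TEXT NOT NULL )" );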

Some of my constraints or assumptions might change over time, but so far I'm happy with them. (e.g. I can imagine that tagging an entry "public" might make it visible to others.)

Anyway the code is (surprise) built using minimal perl & jquery and you can play with it:

The site contains a demo user which you can use. I don't much care if people wish to use it for real once it is more complete, but I expect that it will either be ignored completely or be the kind of thing you wish to self-host.

With that in mind the code is currently closed, but I'll add it to my mercurial repository soon. (Maybe even tonight!)

ObFilm: Dog Soldiers

| 8 comments

 

Not even if you let me video tape it.

10 July 2009 21:50

The online todo list seems popular - or rather, a lot of people logged in with the posted details and created/resolved tasks, at least.

It is kinda cute to watch multiple people all using a site with only one set of login credentials - I guess this is a special case because you cannot easily delete things. Still, I wonder how other sites would fare like that? Would visitors self-organise, some trashing things and others repairing the damage? (I guess we could consider Wikipedia a good example.)

Anyway I've spent a little while this morning and this lunchtime adding in the missing features and doing things suggested by users.

So now:

  • "Duration" is shown for both open & completed tasks.
  • The "home" page is removed as it added no value.
  • Tasks may be flagged as urgent.
  • (Tasks which have titles beginning with "*" are urgent by default.)
  • Searching works across tags, notes, and titles.
  • Tag name completion is pending.

I think, in terms of features, I'm done. I just need to wire up creation of accounts, and the submission of tasks via email. Then un-fuck my actual code.

I guess as a final thing I need to consider email notices. I deliberately do not support or mandate "due dates" for tasks. I think I prefer the idea of an email alert being sent if a task is marked as urgent and has had no activity in the past 24 hours. (Where activity means "new note" - e.g. you'd add "still working on this", or similar, to cancel the pending alert.)
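If I go that route, the check itself is simple enough to run from cron. A sketch, using the illustrative schema from my last post (again: invented names, not the real code):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=todo.db", "", "",
                        { RaiseError => 1 } );

# Open urgent tasks with no note added within the past 24 hours.
my $sql = "SELECT t.id, t.title FROM tasks t
            WHERE t.urgent = 1 AND t.closed = 0
              AND NOT EXISTS ( SELECT 1 FROM notes n
                                 WHERE n.task_id = t.id
                                   AND n.added > ? )";

my $stale = $dbh->selectall_arrayref( $sql, {}, time() - 24 * 60 * 60 );

foreach my $row (@$stale) {
    my ( $id, $title ) = @$row;

    # In real life this would send mail; printing shows the idea.
    print "Urgent task $id ('$title') has had no activity in 24 hours\n";
}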

Sending alerts via twitter could also be an option, although I still mostly abhor it.

I've had a brief look at both tadalist.com and rememberthemilk.com; both seem nice .. but I'm still not sure on a winner yet.

ObFilm: Chasing Amy

| 4 comments

 

I didn't make a statement. I asked a question. Would you like me to ask it again?

18 July 2009 21:50

On my own personal machines I find the use of logfiles invaluable when installing new services, but otherwise I generally ignore the bulk of the data. (I read one mail from each machine containing a summary of the day's events.)

When you're looking after a large network, having access to logfiles is very useful. It lets you see interesting things - if you take the time to look, and if you have an idea of what you want to find out.

So, what do people do with their syslog data? How do you use it?

In many ways the problem of processing the data is that you have two conflicting goals:

  • You most likely want to have access to all logfiles recorded.
  • You want to pick out "unusual" and "important" data.

While you can easily find unique messages (given a history of all prior entries), it becomes a challenge to allow useful searches given the volume of data.

Consider a network of 100 machines. Syslog data for a single host can easily exceed 1,000,000 lines in a single day. (The total number of lines written beneath /var/log/ on the machine hosting www.debian-administration.org was 542,707 for the previous 24 hours. It is not a particularly busy machine, and the Apache logs were excluded.)

Right now I've got a couple of syslog-ng servers which simply accept all incoming messages from a large network and filter them briefly. The remaining messages are inserted into mysql via a FIFO. This approach is not very scalable, and results in a table with millions of rows which is not pleasant to search.
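The FIFO half of that pipeline is nothing exotic; a minimal reader looks something like this sketch, where the pipe path, credentials, and table layout are all invented for illustration:

use strict;
use warnings;
use DBI;

# syslog-ng is configured to write each message to this named pipe.
my $fifo = "/var/run/syslog.fifo";

my $dbh = DBI->connect( "dbi:mysql:database=syslog", "logger", "secret",
                        { RaiseError => 1 } );
my $ins = $dbh->prepare( "INSERT INTO messages ( received, line ) VALUES( ?, ? )" );

# Opening a FIFO for reading blocks until a writer appears.
open( my $pipe, "<", $fifo ) or die "Failed to open $fifo: $!";

while ( my $line = <$pipe> ) {
    chomp($line);
    $ins->execute( time(), $line );
}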

I'm in the process of coming up with a replacement system - but at the same time I suspect that any real solution will depend a lot on what is useful to pull out.

On the one hand, keeping only unique messages makes spotting new things easy. On the other hand, if you start filtering out too much you lose detail. e.g. If you took 10 lines like the one below and removed all but one, you'd lose important details about the number of attacks you've had:

  • Refused connect from 2001:0:53aa:64c:c9a:xx:xx:xx

Obviously you could come up with a database schema that had something like "count,message", plus host tables showing where each message was seen. The point I'm trying to make is that naive folding can mean you miss the fact that user admin@1.2.3.4 tried to log in to host A, host B, and host C..
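To make that concrete: a folding pass which keeps a host-set alongside the count avoids that particular loss. A sketch, assuming input lines of the form "host message ...":

use strict;
use warnings;

my ( %count, %hosts );

while ( my $line = <STDIN> ) {
    chomp($line);
    my ( $host, $msg ) = split( /\s+/, $line, 2 );
    next unless defined $msg;

    $count{$msg}++;
    $hosts{$msg}{$host} = 1;
}

# Show each folded message with its count and the hosts it was seen upon.
foreach my $msg ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    my $where = join( ",", sort keys %{ $hosts{$msg} } );
    print "$count{$msg} x $msg [$where]\n";
}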

I'm rambling now, but I guess the point I'm trying to make is that depending on what you care about your optimisations will differ - and until you've done it you probably don't know what you want or need to keep, or how to organise it.

I hacked up a simple syslog server which accepts messages on port 514 via UDP and writes them to a FIFO, /tmp/sys.log. I'm now reading messages from that pipe with a perl client and writing them to local logfiles - so that I can see the kind of messages that I can filter, collapse, or ignore.

It's interesting to see the spread of severities. Things like NOTICE, INFO, and DEBUG can probably be ignored and just never examined .. but maybe, just maybe, there is the odd daemon that writes interesting things with them..? Fun challenge.

Currently I'm producing files like this:

/var/log/skxlog/DEBUG.user.cron.log
/var/log/skxlog/ALERT.user.cron.log
/var/log/skxlog/INFO.authpriv.sshd.log

The intention is to get a reasonably good understanding of which facilities, priorities, and programs are the biggest loggers. After that I can actually decide how to proceed.
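The listener itself is only a few lines of perl. This is a simplified sketch of the kind of thing I'm running, not the actual code - the parsing is cruder than a proper syslog parser, the facility table is abridged, and binding port 514 requires root:

use strict;
use warnings;
use IO::Socket::INET;

# Standard syslog severity names; facility names abridged.
my @sev = qw( EMERG ALERT CRIT ERR WARNING NOTICE INFO DEBUG );
my %fac = ( 0 => "kern", 1 => "user", 3 => "daemon",
            9 => "cron", 10 => "authpriv" );

my $sock = IO::Socket::INET->new( LocalPort => 514, Proto => "udp" )
  or die "Failed to bind UDP port 514: $!";

my $msg;
while ( $sock->recv( $msg, 4096 ) ) {

    # A datagram starts "<PRI>"; facility = PRI / 8, severity = PRI % 8.
    next unless ( $msg =~ /^<(\d+)>(.*)$/s );
    my ( $pri, $rest ) = ( $1, $2 );
    my $severity = $sev[ $pri & 7 ];
    my $facility = $fac{ $pri >> 3 } || "other";

    # The program name precedes an optional "[pid]" and a colon.
    my ($program) = ( $rest =~ /\s([A-Za-z0-9_.\/-]+)(?:\[\d+\])?:\s/ );
    $program ||= "unknown";

    open( my $log, ">>", "/var/log/skxlog/$severity.$facility.$program.log" )
      or next;
    print $log "$rest\n";
    close($log);
}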

Remember I said you might just ignore INFO severities? If you do that you miss:

IP=127.0.0.1
Severity:INFO
Facility:authpriv
DATE:Jul 18 15:06:28
PROGRAM:sshd
PID:18770
MSG:Failed password for invalid user rootf from 127.0.0.1 port 53270 ssh2

ObFilm: From Dusk Till Dawn

| 9 comments

 

I'm getting married, I'm not joining a convent!

21 July 2009 21:50

(This post was accidentally made live before it was completed; it is now complete.)

I'll keep this brief and to the point.

syslog indexing and searching

Jason Hedden suggested using swish-e to index and then search syslog files which are stored on disk - rather than inserting the log entries into mysql.

I have 120+ machines writing to a central server, and running a search of 'sshd.*refused' takes less than a second to complete now.

(To be fair, php-syslog-ng was fast; it was just ugly, hard to manage, and the mysql database got overloaded.)

Cloud Storage .. but on my machines

I've become increasingly interested in both centralised hosting, and reliable backups.

Cloud storage, where I control all the nodes, allows good backups.

So far I've experimented with both mogilefs and peerfuse. Neither setup is entirely appropriate for me, but I love the idea of seamless replication.

ice-creams

Many ice-creams bought in supermarkets come in packs of three. Annoying:

  • One for madam x.
  • One for me.
  • Who gets the spare? (Me, when she's gone ;)

It happens too often to be a coincidence: my cynicism wonders if it is designed to ensure people buy two boxes..?

new software releases

skxlist, the simple mailing list manager, got a couple of new options after user-submitted suggestions.

asql got a bugfix.

My todolist code is now running on at least one other site!

Nothing else much to say mostly because I'm suffering from poor sleep at the moment. In part because I've got a new clock on my bedroom windowsill and the ticking is distracting me (not to mention the on-the-hour chime!)

Still I'm sure it will pass. I grew up in York, in a house whose back yard abutted the local convent. Every hour, on the hour, they'd ring a bell! We moved house when I was about 11, but for months after the move I'd still wake up at midnight confused that the bells hadn't rung...

ObFilm: Mamma Mia!

| 5 comments

 

You tortured me? You tortured me!

30 July 2009 21:50

DNS is hard, let's go shopping.

<irony>

CNAME & MX records do not mix.

</irony>

ObFilm: V for Vendetta.

| No comments

 

I'm begging you, take a shot. Just one hit.

31 July 2009 21:50

A while back I mentioned my interest in clustered storage solutions. I only mentioned this in passing because I was wary of getting bogged down in details.

In short though I'd like the ability to seamlessly replicate content across machines which I own, maintain, and operate (I will refer to "storage machines" as "client nodes" to keep in the right theme!)

In clustering terms there are a few ways you can do this. One system which I've used in anger is drbd, which basically allows you to share block devices across a network. This approach has some pros and some cons.

The biggest problem with drbd is that it allows the block device to be mounted at only one location - so you cannot replicate in a live sense. The fact that it is block-based also has implications for the size of your available storage.

Moving back to a higher level, I'm pretty much decided that when I say I want "a replicated/distributed filesystem" that is precisely what I mean: the filesystem should be replicated, rather than having the contents split up and shared about in pieces. The distinction is pretty fine, but considering the case of backups this is how you want it to work: each client node has a complete backup of the data. Of course the downside is that the maximum storage available is that of the smallest node.

(e.g. 4 machines with 200Gb of space, and 1 machine with 100Gb - you cannot write more than 100Gb to the clustered filesystem in total, or the replicated contents will be too large to fit on the smaller host.)

As a concrete example, given the tree:

/shared
/shared/Text/
/shared/Text/Articles/
/shared/Text/Crypted/

I'd expect that exact tree to be mirrored and readable as a filesystem on the client nodes. In practice I want a storage node to either be complete, or broken. I don't want to worry about failure cases when dealing with things like "a file must be present on 2+ nodes out of the 5 available".

Now the cheaty way of doing this is to work like this:

# modify files
# for i in client1 client2 client3 ; do \
    rsync -vazr --in-place --delete /shared $i:/backing-store ; \
  done

i.e. Just rsync after every change. The latency would be a killer, and the level of dedication required would be super-human, but if you could wire that up in a filesystem you'd be golden. And this is why I've struggled with lustre, mogilefs, and tahoe: they provide real and complicated clustering - but they do not give you a simple filesystem you can pick up and run with from any client node in an emergency. They're more low-level. (Probably more reliable, sexy and cool. I bet they get invited to parties and stuff ;)

Thankfully somebody has written exactly what I need: chironfs. This allows you to spread writes to a single directory over multiple storage directories - each of which gets a complete copy.

This is the first example from the documentation:

# make client directories and virtual one
mkdir /client1 /client2 /real

# now do the magic
chironfs --fuseoptions allow_other /client1=/client2 /real

Once you've done this you can experiment:

skx@gold:/real$ touch x
skx@gold:/real$ ls /client1/
x
skx@gold:/real$ ls /client2/
x
skx@gold:/real$ touch fdfsf

skx@gold:/real$ ls /client1
fdfsf  x
skx@gold:/real$ ls /client2
fdfsf  x

What this demonstrates is that running "touch x" in the "/real" directory resulted in the same operation being carried out in both /client1 and /client2.

This is looking perfect, but wait it isn't distributed?!

That's the beauty of this system: if /client1 & /client2 are NFS-mounted, it works! Just the way you imagine it should!

Because I'm somewhat wary of fuse at the moment I've set this up at home - with a "/backups" directory replicated to an LVM filesystem locally and also to a remote host which is mounted via SSH.

Assuming it doesn't go mad, lose data, or fail badly if the ssh-based mount disappears I'm going to be very very happy.

So far it is looking good though; I can see entries logged like this when I unmount behind its back:

2009/07/30 23:28 mknod failed accessing /client2 No such file or directory
2009/07/30 23:28 disabling replica failed accessing /client2

So a decent alarm on that would make it obvious when replication was failing - just like the emails mdadm sends out, a semi-regular cron-check would probably be sufficient.
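Something as dumb as the following, run from cron, would do - assuming chironfs has been pointed at a logfile (the path here is my guess, not a chironfs default):

use strict;
use warnings;

# Wherever chironfs has been configured to log - an assumption.
my $logfile = "/var/log/chironfs.log";

open( my $fh, "<", $logfile ) or die "Failed to open $logfile: $!";

while ( my $line = <$fh> ) {
    # Match the failure lines shown above.
    if ( $line =~ /disabling replica/ ) {

        # A real check would email this, mdadm-style, and remember
        # what it had already reported rather than repeating itself.
        print "Replication failure: $line";
    }
}

close($fh);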

Anyway, enough advertising! So far I love it, and I think that if it works out for me in anger chironfs should be part of Debian. Partly because I'll want to use it more, and partly because it just plain rocks!

ObFilm: The Breakfast Club

| 13 comments