
I saw green fields and flowers. I could smell the grass.

20 January 2009 21:50

Fabio Tranchitella recently posted about his new filesystem which really reminded me of an outstanding problem I have.

I do some email filtering, and that is setup in a nice distributed fashion. I have a web/db machine, and then I have a number of MX machines which process incoming mail rejecting spam and queuing good mail for delivery.

I try not to talk about it very often, because that just smells of marketing. More users would be good, but I find explicit promotion & advertising distasteful. (It helps to genuinely consider users as users, and not customers, even though money changes hands.)

Anyway I handle mail for just over 150 domains (some domains will receive 40,000 emails a day, others will receive 10 emails a week) and each of these domains has different settings, such as "is virus scanning enabled?" and "which are the valid localparts at this domain?", then there are whitelists, blacklists, all that good stuff.

The user is encouraged to fiddle with their settings via the web/db/master machine - but ultimately any settings are actually applied and used upon the MX boxes. This was initially achieved by having MySQL database slaves, but eventually I settled upon a simpler and more robust scheme: using the filesystem. (Many reasons why, but perhaps the simplest justification is that this way things continue to work even if the master machine goes offline, or there are network routing issues. Each MX machine is essentially standalone and doesn't need to be always talking to the master host. This is good.)

On the master each domain has settings beneath /srv. Changes are applied to the files there, and to make the settings live on the slave MX boxes I can merely rsync the contents over.

Here's an anonymized example of a settings hierarchy:

/srv/foo.com/
|-- basics
|   `-- enabled
|-- dnsbl
|   |-- action
|   `-- zones
|       |-- foo.example.com
|       `-- bar.spam-house.com
|-- language
|   `-- english-only
|-- mx
|-- quarantine
|   `-- admin_._admin
|-- spam
|   |-- action
|   |-- enabled
|   `-- text
|-- spamtraps
|   |-- anonymous
|   `-- bobby
|-- uribl
|   |-- action
|   |-- enabled
|   `-- text
|-- users
|   |-- bob
|   |-- root
|   |-- simon
|   |-- smith
|   |-- steve
|   `-- wildcard
|-- virus
|   |-- action
|   |-- enabled
|   `-- text
`-- whitelisted
    |-- enabled
    |-- hosts
    |-- subjects
    |   `-- [blah]
    |-- recipients
    |   `-- simon
    `-- senders
        |-- root@steve.orgy
        |-- @someisp.com
        `-- foo@bar.com
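
The push itself is nothing clever - rsyncing a tree like the one above out to each MX box. A minimal sketch of the sort of invocation involved (the host names and flags here are illustrative, not the real configuration):

    #!/bin/sh
    # Push the settings tree from the master to each MX box.
    # --delete keeps the slaves' copies identical to /srv on the master.
    for host in mx1.example.com mx2.example.com; do
        rsync -az --delete /srv/ "${host}":/srv/
    done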

So a user makes a change on the web machine. That updates /srv on the master machine immediately - and then every fifteen minutes or so the settings are pushed across to the MX boxes where the incoming mail is actually processed.

Now ideally I want the updates to be applied immediately, which suggests mounting the settings remotely via sshfs or similar. But as a matter of policy I also want to keep things reliable: if the main box dies I don't want the MX machines to suddenly cease working. So relying solely on a remote mount via sshfs, NFS, or the like is ruled out.

Thus far I've not really looked at the possibilities, but I'm leaning towards having each MX machine look for settings in two places:

  • Look for "live" copies in /srv/
  • If that isn't available then fall back to reading settings from /backup/

That way I can rsync to /backup on a fixed schedule, but expect that in everyday operation I'll get current/live settings from /srv via NFS, sshfs, or something similar.
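
In shell terms the lookup on each MX box could be as small as this - purely a sketch, the helper and layout are an assumption rather than existing code:

    # Prefer the live (remotely mounted) settings; fall back to the
    # locally rsync'd copy if the live tree isn't available.
    settings_dir () {
        domain="$1"
        if [ -d "/srv/${domain}" ]; then
            echo "/srv/${domain}"
        else
            echo "/backup/${domain}"
        fi
    }

    # e.g.  cat "$(settings_dir foo.com)/spam/enabled"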

My job for the weekend is to look around and see what filesystems are available and look at testing them.

Obmovie:Alive


Comments on this entry

Matt Simmons at 15:36 on 20 January 2009
A cluster would be ideal, but there's a lot of overhead for setting something like that up. Even something like Lustre might be a lot of work.
Have you considered something like Coda? It seems pretty stable, and sounds like it might do what you're wanting.
Steve Kemp at 15:45 on 20 January 2009

Thanks for the pointers! At this point I've tried nothing else, having jumped from a two-node MySQL master-slave setup to a filesystem/rsync-based method when performance started to suffer, and to avoid problems if the primary system went down.

I think that using a proper cluster/replicated file-system is going to be a perfect fit - it is just a matter of examining a few of the popular ones and working out how they cope in different failure cases.

The current setup is great from a redundancy point of view - if the link between master and MXs goes down there is no problem, and adding extra nodes is trivial.

The only issue is the time-lag of updates. I could solve that by having the rsyncs run more frequently, or be triggered on demand when changes have been made.

I don't know enough about the distributed/replicated/cluster filesystems to know if they would be a better alternative to my manual approach, but my suspicion is that they would be a better way to go. I just have to play, experiment, and learn more.

Jan Hudec at 20:06 on 20 January 2009
For any data that is updated less frequently than it is accessed, triggering the synchronization from the master machine upon change requires the least network traffic while also giving the slaves the smallest latency (provided you have enough storage too, which you seem to). I don't know whether any replicated filesystem implements this, but I suspect setting up a triggered rsync would be easier anyway.
Roberto at 12:09 on 21 January 2009
Did you try/think about using something like cfengine or puppet? They're not as simple as rsync, but they're much like what you seem to be after.
Steve Kemp at 12:16 on 21 January 2009

I do use cfengine already, but these settings are changed by users, via their browser, so I'm not sure I see how CFEngine/Puppet could be used to help these settings propagate?

Right now I've updated the code so that if a setting is changed /srv/dirty is created - and there is a cronjob that runs every three minutes to perform a sync if that flag is present.
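
The cron side is tiny - something along these lines, give or take (the host names are placeholders):

    #!/bin/sh
    # Runs from cron every three minutes on the master.
    # Only rsync when the web code has flagged a change.
    if [ -e /srv/dirty ]; then
        # Remove the flag first, so changes made during the sync
        # recreate it and get picked up on the next run.
        rm -f /srv/dirty
        for host in mx1.example.com mx2.example.com; do
            rsync -az --delete /srv/ "${host}":/srv/
        done
    fi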


Dan Callahan at 13:00 on 21 January 2009
Would it be possible / reasonable to put the data in a VCS repo, and set a commit hook that fires off an rsync after pushes?
Especially given that the settings are changed by users -- this gives you a nice built-in rollback in case any problems arise.
sytoka at 23:02 on 21 January 2009
inotify, dnotify ?
When your master files change -> rsync all the changes to the other servers via an inotify action.
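
For example with the inotify-tools package - just a sketch, events and host illustrative:

    inotifywait -m -r -e close_write,create,delete,move /srv |
    while read path event file; do
        rsync -az --delete /srv/ mx1.example.com:/srv/
    done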

Roberto at 12:26 on 22 January 2009
WRT propagating changes with cfengine/puppet: I don't remember how to do it in cfengine (I use it, but not fully), but in puppet (which I don't use too much either; I'm learning about it and testing) you can "subscribe" to one file and act when it changes. If you're already dumping the users' settings to files, you could retrieve them with puppet (usually it checks every 30 seconds) and do whatever you want if any of the files change. Sorry if I sound like I'm proselytizing. It's just that your setup seems to fit the usual scenario for using puppet, and I'm too geeky and vain to avoid commenting about it :-)
Steve Kemp at 22:03 on 22 January 2009

If I were pushing updates to N hosts from my desktop, or similar, I could see Puppet being useful. Indeed my hosts are already mostly CFEngine-controlled.

But so far I'm thinking that's overkill and not a useful way to go forward.

For the moment I've updated the code to create a /srv/dirty flag which can be polled pretty quickly to transfer settings on a faster schedule.

I think that Dan Callahan's notion of using a revision control system and triggering that is perfect for me, and I'm going to work towards that over the coming weekend.

It gives me a lot of useful things, as you'd expect from a revision control system, and it should be no worse than plain rsync if there's a network link issue.
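
Whichever revision control system it ends up being, the hook itself should be tiny. With git, for instance, a post-commit hook could simply mark the tree dirty and let the existing sync job do the push - illustrative only, nothing is written yet:

    #!/bin/sh
    # .git/hooks/post-commit in the settings repository:
    # flag the tree as changed so the regular sync job pushes it out,
    # rather than duplicating the rsync logic here.
    touch /srv/dirty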