|
Entries tagged slaughter
17 January 2010 21:50
So I previously mentioned I'd knocked up a simple automation tool for deploying policies (read "scripts") from a central location to a number of distinct machines.
There seemed to be a small amount of interest, so I've written it all up:
- slaughter - Perl System Administration & Automation tool
Why slaughter? I have no idea. Yesterday evening it made sense, somehow, on the basis it rhymed with auto - (auto as in automation). This morning it made less sense. But meh.
The list of primitives has grown a little, and the brief examples probably provide a little flavour.
In short you:
- Install the package upon a client you wish to manage.
- When "slaughter" is invoked it will fetch http://example.com/slaughter/default.policy
- This file may include other policy files via "IncludePolicy" statements.
- Once all the named policies have been downloaded/expanded they'll be written to a local file.
- The local file will have Perl-fu wrapped around it such that the Slaughter::linux module is available
- This is where the definitions for "FetchFile", "Mounts", etc are located.
- The local file will be executed then removed.
All in all it's probably more complex than it needs to be, but I've managed to do interesting things primarily with these new built-in primitives, and none of it is massively Debian, or even Linux, specific.
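For a little flavour, a policy is nothing more than Perl calling those primitives. A minimal sketch might look like this - the paths are invented for illustration, and the FetchFile/UserExists calls match the form they take in later examples:

if ( UserExists( User => "root" ) )
{
    FetchFile( Source => "/files/motd",
               Dest   => "/etc/motd",
               Owner  => "root",
               Group  => "root",
               Mode   => "644" );
}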
ObSubject: Jaws
Tags: automation, slaughter
|
24 January 2010 21:50
Slaughter
I received my first patch to slaughter today, which made me happy.
(I've made a new release including it, and updated the list of primitives to actually document the file-deletion facilities which previously I'd omitted to avoid encouraging mass-breakage.)
Signing Binaries
Andrew Pollock mentions that the days of elfsign might be numbered.
This is a shame because I've always liked the idea of signing binaries. Once upon a time, in the 2.4.x days, I wrote a kernel patch which would refuse to execute non-signed binaries. (This was mostly a waste of time, since it denied the execution of shell scripts - which meant that the system init scripts mostly failed. My solution in the end was to only modprobe my module once the system was up and running, and hope for the best ...)
Having performed only a quick search, I don't see anything like that available right now.
- elfsign will let you store a binary's MD5 hash.
- bsign will let you sign a binary with a GPG key.
But where is the kernel patch to only execute such hashed/signed binaries, preventing the execution of random shell scripts and potentially trojaned binaries?
Without that I think signing binaries is a crazyish thing to do. Sure you can test that a file hasn't been modified, but even without those tools you can do the same thing via md5sums.
(ObRandom: Clearly if you mass-modify all your binaries the system md5sums database will be trashed.)
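For example, checking a single file against a previously recorded checksum needs nothing more than Digest::MD5 - the path and the hash below are obviously invented:

use Digest::MD5;

my $file     = "/bin/ls";
my $expected = "0123456789abcdef0123456789abcdef";   # recorded earlier

open( my $fh, "<", $file ) or die "Failed to open $file: $!";
binmode($fh);
my $sum = Digest::MD5->new->addfile($fh)->hexdigest();
close($fh);

print( ( $sum eq $expected ) ? "$file unchanged\n" : "$file modified!\n" );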
Perl UTF
I've received a bug report against chronicle, my blog compiler.
It seems that some versions of perl fail to run this:
#
# Run the command, reading stdout.
#
open( FILTER, "$cmd|;utf8" ) or
die "Failed to run filter: $!";
Removing the ;utf8 layer allows things to work, but will trash any UTF-8 characters in the output - so that's a nasty solution.
I'm not sure what the sane solution is here, so I'm going to sit on it for a few days and continue to write test scripts.
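One approach worth testing - this is only a guess, not a confirmed fix - is to open the pipe without any layer, then add the UTF-8 layer explicitly via binmode:

#
# Run the command, reading stdout, then add the UTF-8 layer.
#
open( FILTER, "-|", $cmd ) or
  die "Failed to run filter: $!";
binmode( FILTER, ":encoding(UTF-8)" );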
ObSubject: 300
Tags: bsign, chronicle, md5sum, slaughter
|
18 April 2010 21:50
This past week has had a couple of minor software releases:
- chronicle
I made a new release which improves support for foreign languages, so dates can be internationalised, etc.
The online demos now include one with French month names.
- slaughter
The perl-based sysadmin tool had a minor update earlier today, after it was pointed out that I didn't correctly cope with file content checks.
I'm still pretty pleased with the way this works out, even if it is intentionally simple.
- milli
This is a simple bug-record-thingy which I was playing with recently, and which I've now started using to record bugs in other projects.
I'll pretend it's a fancy distributed bug-tracker, but actually it isn't. It's nothing more than a bunch of text files associated with a project, which have sufficiently random names that collisions are unlikely - which makes it semi-distributed-friendly.
Today I'll be learning to love Javascript a little more. I want to use the galleriffic image gallery - but it doesn't make thumbnails automatically - which galleria does.
I need to either come up with my own which looks like galleriffic, or port the thumbnail bits over.
(I'm currently using a slightly modified version of galleriffic for my people-shots.)
ObFilm: Hancock
Tags: chronicle, galleria, gallerific, javascript, jquery, slaughter
|
13 October 2010 21:50
Tonight I was having some connectivity issues, so after much diagnostic time and pain I decided to reboot my router. The moment my home router came back my (external) IP address changed, and suddenly I found I could no longer log in to my main site.
Happily however I have serial console access, and I updated things such that my new IP address was included in the hosts.allow file. [*]
The next step was to push that change round my other boxes, and happily I have my own tool slaughter which allows me to make such global changes in a client-pulled fashion. 60 minutes later cron did its magic and I was back.
This reminds me that I've let the slaughter tool stagnate. Mostly because I only use it to cover my three remote boxes and my desktop, and although I received one bug report (+fix!) I've never heard of anybody else using it.
I continue to use and like CFEngine at work. Puppet & Chef have been well argued against elsewhere, and I've still to investigate Bcfg2 + FAI.
Mostly I'm happy with slaughter. My policies are simple, readable, and intuitive. Know Perl? Learn the "CopyFile" primitive, for example, and you're done.
By contrast the notion of state machines, functional operations, and similar seems over-engineered in other tools. Perhaps that's my bug, perhaps that's just the way things are - but the rants linked to above make sense to me and I find myself agreeing 100%.
Anyway; slaughter? What I want to do is rework it such that all policies are served via rsync and not via HTTP. Other changes, such as the addition of new primitives, don't actually seem necessary. But serving content via rsync just seems like the right way to go. (The main benefit is recursive copies of files become trivial.)
I'd also add the ability to mandate GPG-signatures on policies, but that's possible even now. The only step backwards I see is that currently I can serve content over SSL, but that should be fixable even if via stunnel.
*
My /etc/hosts.allow file contains this:
ALL: 127.0.0.1
ALL: /etc/hosts.allow.trusted
ALL: /etc/hosts.allow.trusted.apache
Then hosts.allow.trusted contains:
# www.steve.org.uk
80.68.85.46
# www.debian-administration.org
80.68.80.176
# my home.
82.41.x.x
I've never seen anybody describe something similar, though to be fair it is documented. To me it just seems clean to limit the IPs in a single place.
To conclude, hosts.allow.trusted.apache is owned by root.www-data, and can be updated via a simple CGI script - which allows me to add a single IP address on the fly for the next 60 minutes. Neat.
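A rough sketch of the idea - not the real script, and with authentication and the 60-minute expiry deliberately left out - looks like this:

#!/usr/bin/perl -w
#
# Sketch only: append the caller's address to the libwrap include-file.
#
use strict;

my $ip = $ENV{'REMOTE_ADDR'} || "";

print "Content-Type: text/plain\r\n\r\n";

if ( $ip =~ /^([0-9.]+)$/ )
{
    open( my $fh, ">>", "/etc/hosts.allow.trusted.apache" ) or
      die "Failed to open file: $!";
    print $fh "$1\n";
    close($fh);
    print "Added $1\n";
}
else
{
    print "No usable address found.\n";
}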
ObQuote: Tony is a little boy that lives in my mouth. - The Shining
Tags: adsl, libwrap, rsync, slaughter, todo
|
1 November 2011 21:50
There are many system administration and configuration management tools available, I've mentioned them in the past and we're probably all familiar with our pet favourites.
The "biggies" include CFEngine, Puppet, Chef, BFG2. The "minis" are largely in-house tools, or abuses of existing software such as fabric.
My own personal solution manages my home network, and three dedicated servers I pay for in various ways.
Currently I've been setting up some configuration "stuff" for a friend and I've elected to manage some of the setup with this system of my own, and I guess I need to decide what I'm going to do going forward.
slaughter is well maintained, largely by virtue of not doing too much. The things it does are genuinely useful and entirely sufficient to handle a lot of the common tasks - and because the server-side requirement is an HTTP server, and the only client-side requirement is cron, it is trivial to deploy.
In the past I've thought of three alternatives that would make it more complex:
- Stop using HTTP and have a mini-daemon to both serve and schedule.
- Stop using HTTP and use rsync instead.
- Rewrite it in Javascript. (Yes, really).
Each approach has its appeal. I like the idea of only executing GPG-signed policies, and that would be trivial if there was a real server in place. It could also use SSL because that's all you need for security (ha!).
On the other hand using rsync would allow me to trivially implement the one primitive I actually miss at times - the ability to recursively download and install a remote directory tree. (At the moment I solve this problem by downloading a .tar file and unpacking it. Not good. It doesn't cope with template expansion and is fiddlier than I like.)
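Inside a policy the workaround looks roughly like this - the file names and paths are invented, and since policies are plain perl system() is always available:

FetchFile( Source => "/htdocs.tar.gz",
           Dest   => "/tmp/htdocs.tar.gz",
           Owner  => "root",
           Group  => "root",
           Mode   => "644" );

if ( -e "/tmp/htdocs.tar.gz" )
{
    system( "tar -xzf /tmp/htdocs.tar.gz -C /var/www/" );
}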
In the lifetime of the project I think I've had 20-50 feature requests or comments, which suggests it might actually be used by 50-100 people. (Ha! Optimism)
In the meantime I'll keep a careful eye on the number of people who download the tarball & the binary packages...
ObQuote: "I have vermin to kill. " - Kill Bill
Tags: perl, slaughter, sysadmin
|
26 November 2011 21:50
Recently I said that my perl-based sysadmin tool, Slaughter, was at the cross-roads. I wasn't sure if I should leave it alone, or update it somehow.
As I'm generally lazy and busy (yes it is possible to be both simultaneously!) I didn't do anything.
But happily earlier in the week I received a bunch of updates from Jean Baptiste which implemented support for managing Windows clients, via Strawberry Perl.
So I guess the conclusion is: Do nothing. Change nothing. Just fix any issues which are reported to me, and leave it as-is. (I did a little more than that, refactoring to avoid duplication and improve "neatness".)
As I said at the time I've had some interesting feedback, suggestions and bugfixes from people over the past year or so - so I shouldn't be surprised to learn I'm not the only person using it.
ObQuote: "Oh, yes, a big cat! My salvation depends upon it! " - Dracula (1992)
Tags: slaughter, strawberry, windows
|
10 March 2012 21:50
Recently I accidentally flooded Planet Debian with my blog feed. This was an accident caused by some of my older blog entries not having valid "Date:" headers. (I use chronicle which parses simple text files to build a blog, and if there is no Date: header present in entries it uses the CTIME of the file(s).)
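For reference, an entry with an explicit date looks roughly like this (the headers other than Date: are from memory, so treat this as illustrative):

Title: An example post
Date: 10 March 2012
Tags: example

The body of the entry goes here.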
So why did my CTIMEs get lost? Short version: I had a drive failure and a PSU failure which led to me rebuilding a few things and cloning a fresh copy of my blog to ~/hg/blog/.
My host is now once again OK, but during the pain the on-board sound started to die - horribly crackly and sounding bad. I figure the PSU might have caused some collateral damage, but so far that's the only sign I see.
I disabled the on-board sound and ordered a cheap USB sound device which now provides me with perfect sound under the Squeeze release of Debian GNU/Linux.
In the past I've ranted about GNU/Linux sound. So I think it is only fair to say this time things worked perfectly - I plugged in the device, it was visible in the output of dmesg and in /proc/asound/cards, and suddenly everything just worked. Playing music (mpd + sonata) worked immediately, and when I decided to try playing a movie with xine just for fun, sound was mixed appropriately - such that I could hear both "song" + "movie" at the same time. Woo.
(I'm not sure if I should try using pulse-audio, or similar black magic. Right now I've just got ALSA running.)
Anyway as part of the re-deployment of my desktop I generated and pass-phrased a new SSH key, and then deployed that with my slaughter tool. My various websites all run under their own UID on my remote host, and a reverse-proxy redirects connections. So for example I have a Unix "s-stolen" user for the site stolen-souls.com, a s-tasteful user for the site tasteful.xxx, etc. (Right now I cannot remember why I gave each "webserver user" an "s-" prefix, but it made sense at the time!)
Anyway once I'd fixed up SSH keys I went on a spree of tidying up and built a bunch of meta-packages to make it a little more straightforward to re-deploy hosts in the future. I'm quite pleased with the way those turned out to be useful.
Finally I decided to do something radical. I installed the bluetile window manager, which allows you to toggle between "tiling" and "normal" modes. This is my first foray into tiling window managers, but it seems to be going well. I've got the hang of resizing via the keyboard and tweaked a couple of virtual desktops so I can work well both at home and on my work machine. (I suspect I will eventually migrate to awesome, or similar, this is very much a deliberate "ease myself into it" step.)
ObQuote: "Being Swedish, the walk from the bathroom to her room didn't need to be a modest one. " - Cashback.
Tags: alsa, chronicle, random, slaughter, squeeze, usb
|
13 October 2012 21:50
Software
I've been using redis for a while now. It is a fast in-memory storage system which offers persistence (unlike memcached), as well as several primitive data-types such as lists & hashes.
Anyway it crossed my mind that I don't have a backup of the data it contains, so I knocked up a simple script to dump the contents in plain-text:
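The script boils down to something like this - a sketch using the Redis module from CPAN rather than the exact code, and the output format is purely illustrative:

use strict;
use warnings;
use Redis;

my $redis = Redis->new( server => "127.0.0.1:6379" );

foreach my $key ( $redis->keys('*') )
{
    my $type = $redis->type($key);

    if ( $type eq "string" )
    {
        print "$key = ", $redis->get($key), "\n";
    }
    elsif ( $type eq "list" )
    {
        print "$key = [", join( ", ", $redis->lrange( $key, 0, -1 ) ), "]\n";
    }
    elsif ( $type eq "hash" )
    {
        my %h = $redis->hgetall($key);
        print "$key = {", join( ", ", map {"$_=$h{$_}"} keys %h ), "}\n";
    }
    else
    {
        print "$key ($type) - not dumped\n";
    }
}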
In other software-news I've had some interesting and useful feedback and made two new releases of my slaughter sysadmin tool - it now contains a wee test suite and more robustness.
Hardware
I received an email last night to say that my Raspberry PI has shipped. Ordered 24/05/2012, and dispatched 12/10/2012 - I'd almost forgotten about it.
My plan is to make it a media-serving machine, SNES emulator, or similar. Not 100% decided yet.
Finally I've taken the time to repaint my office. When I last wrote about working from home I didn't include pictures - I just described the process of using a "work computer" and a "personal computer".
So this is what my office used to look like. As you can see there are two machines and a huge desk.
With a few changes I now have an office which looks like this - the two machines are glued together with a KVM, and I have much more room behind it for another desk, more books, and similar toys. Additionally my dedication is now enforced - I simply cannot play with both computers at the same time.
The chair was used to mount the picture - usually I sit on a kneeling chair, which is almost visible.
What inspired the painting? Partly the need for more space, but mostly water damage. I had a leaking ceiling. (Local people will know all about my horrible leaking roof situation).
The end?
Tags: pi, raspberry pi, redis, slaughter
|
24 October 2012 21:50
There have been a few interesting discussions going on in parallel about my slaughter sysadmin tool.
I've now decided there will be a 2.0 release, and that will change things for the better. At the moment there are two main parts to the system:
- Downloading policies
These are instructions/perl code that are applied to the local host.
- Downloading files
Policies are allowed to download files, e.g. /etc/ssh/sshd_config templates, etc.
Both these occur over HTTP fetches (SSL may be used), and there is a different root for the two trees. For example you can see the two public examples I have here:
A fetch of the policy "foo.policy" uses the first prefix, and a fetch of the file "bar" uses the latter prefix. (In actual live usage I use a restricted location because I figured I might end up storing sensitive things, though I suspect I don't.)
The plan is to update the configuration file to read something like this:
transport = http
#
# Valid options will be
# rsync | http | git | mercurial | ftp
#
#
# each transport will have a different prefix
#
prefix = http://static.steve.org.uk/private
# for rsync:
# prefix=rsync.example.com::module/
#
# for ftp:
# prefix=ftp://ftp.example.com/pub/
#
# for git:
# prefix=git://github.com/user/repo.git
#
# for mercurial
# prefix=http://repo.example.com/path/to/repo
#
I anticipate that the HTTP transport will continue to work the way it currently does. The other transports will clone/fetch the appropriate resource recursively to a local directory - say /var/cache/slaughter - so the complete archive of files/policies will be available locally.
The HTTP transport will continue to work the same way with regard to file fetching, i.e. fetching files remotely on-demand. For all other transports the "remote" file being copied will be pulled from the local cache.
So assuming this:
transport = rsync
prefix = rsync.company.com::module/
Then the following policy will result in the expected action:
if ( UserExists( User => "skx" ) )
{
    # copy
    FetchFile( Source => "/global-keys",
               Dest   => "/home/skx/.ssh/authorized_keys2",
               Owner  => "skx",
               Group  => "skx",
               Mode   => "600" );
}
The file "/global-keys" will refer to /var/cache/slaughter/global-keys which will have been already downloaded.
I see zero downside to this approach; it allows the HTTP stuff to continue to work as it did before, and it allows more flexibility. We can benefit from knowing that the remote policies are untampered with, for example via the integrity checking built into git/mercurial, and from the speed gains of rsync.
There will also be an optional verification stage. So the code will roughly go like this:
- 1. Fetch the policy using the specified transport.
- 2. (Optionally) run some local command to verify the local policies (sketched below).
- 3. Execute policies.
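To make step 2 concrete, the verification hook could be as small as the following - purely illustrative, since neither the path nor the command is an existing option at this point:

# Illustrative only: verify a detached signature of the downloaded
# policy, and refuse to run anything if the check fails.
my $policy = "/var/cache/slaughter/default.policy";

if ( system( "gpg", "--verify", "$policy.asc", $policy ) != 0 )
{
    die "Policy verification failed - not executing.\n";
}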
I'm not anticipating additional changes, but I'm open to persuasion.
Tags: slaughter, todo
|
26 October 2012 21:50
Work on slaughter 2.x is going rather well.
The scripting hasn't changed, and no primitives have been altered to break backward compatibility, but it is probably best to release this as "slaughter2" - because the way to specify the source from which to pull scripts has changed.
Previously we'd specify two arguments (or have them in a configuration file):
- --server=example.com
- --prefix=/slaughter/
That would result in policies being downloaded from:
http://example.com/slaughter/
Now the rework is complete we use "transports" and "prefixes". The new way to specify the old default is to run with:
--transport=http --prefix=http://example.com/slaughter/
I've implemented four transports thus far:
The code has been made considerably neater, the test-cases are complete, and the POD/inline documentation is almost 100% complete.
Adding additional revision-controlled transports would be trivial at this point - but I suspect I'd be wasting my time if I were to add CVS support!
Life is good. Though I've still got a fair bit more documentation, prettification and updates to make before I'm ready to release it.
Play along at home if you wish: via the repository.
Tags: slaughter
|
6 December 2012 21:50
Some brief software updates:
- Custodian
This is the monitoring tool I wrote for Bytemark. It still rocks, and has run over 10 million tests without failure. I'd love more outside feedback, even if just to say "documentation needs work".
- Slaughter
This is my sysadmin tool for multiple hosts - consider it cfengine-lite, or cfengine-trivial more likely.
The 2.x release is finally out, and features pluggable transports. That means your central server can be running HTTP, RSYNC, FTP, or anything you like.
90% of the changes came from or were inspired by Csillag Tamas, to whom I owe a debt of thanks.
- Templer
A static-site generator, written in Perl.
I use this to generate blogspam.net, and other sites from simple layouts. Tutorial available online.
- redis-document-store
A trivial hack which allows using Redis as a schema-less document storage system.
Assuming you never delete documents it is simple, transparent, and already in live use for Debian Administration.
Random Comment on Templer:
Although I've made extensive notes on common static site generators, and they will be discussed at length in the near future, I do want to highlight one problem common to 90% of them: Symbolic links.
For example webgen fails my simple test:
~/hg/websites$ webgen create test.example.com
~/hg/websites$ cd test.example.com/src/
~/hg/websites/test.example.com/src$ mkdir jquery-1.2.3
~/hg/websites/test.example.com/src$ touch jquery-1.2.3/jquery.js
~/hg/websites/test.example.com/src$ ln -s jquery-1.2.3 jquery
~/hg/websites/test.example.com$ webgen
Starting webgen...
...
Finished
~/hg/websites/test.example.com$ ls out/ | grep jq
jquery-1.2.3
Here we see that creating a symlink to a directory has not produced a matching symlink in the output - something I use frequently.
Some tools mangle symlinked directories, or files, and some ignore them completely. Neither is acceptable.
Tags: custodian, slaughter, static, templer
|
29 December 2012 21:50
A couple of days ago I made a new release of slaughter, to add a new primitive I was sorely missing:
if ( 1 != IdenticalContents( File1 => "/etc/foo",
                             File2 => "/etc/bar" ) )
{
    # do something because the file contents differ
}
This allows me to stop blindly over-writing files if they are identical already.
As part of that work I figured I should be more "visible", so on that basis I've done two things:
After sanity-checking my policies I'm confident I'm not leaking anything I wish to keep private - but there is some news being disclosed ;)
Now that is done I think there shouldn't be any major slaughter-changes for the foreseeable future; I'm managing about ten hosts with it now, and being perl it suits my needs. The transport system is flexible enough to suit most folk, and there are adequate facilities for making local additions without touching the core - so if people do want to do new things they don't need me to make changes. Hopefully.
ObQuote: "Yippee-ki-yay" - Die Hard, the ultimate Christmas film.
Tags: github, slaughter
|
2 February 2013 21:50
It was interesting to read Martin F. Krafft's recent proposal for a botnet-like configuration management system.
Professionally I've used CFEngine, which in version 2.x supported a bare minimum of primitives, along with a distribution system to control access to a central server. Using these minimal primitives you could do almost anything:
- Copy files, and restart services that depend upon them.
- Make minor edits to files. (Appending lines not present, replacing lines you no longer wanted, etc)
- Installing / Removing packages.
- More ..
Now that I have my mini cluster (and even before that, when I had 3-5 machines) it was time to look around for something for myself.
I didn't like the overhead of puppet, and many of the other systems. Similarly I didn't want to mess around with weird configuration systems. From CFEngine I'd learned that using only a few simple primitives would be sufficient to manage many machines, provided you could wrap them in a real language - for control flow, loops, conditionals, etc. What more natural choice was there than perl, the sysadmin's army-knife?
To that end slaughter was born:
- Download policies (i.e. rules) to apply from a central machine using nothing more complex than HTTP.
- Entirely client-driven, and scheduled via cron.
Over time it evolved so that HTTP wasn't the only transport. Now you can fetch your policies, and the files you might serve, via git, hg, rsync, http, and more.
Today I've added one final addition, and now it is possible to distribute "modules" alongside policies and files. Modules are nothing more than perl modules, so they can be as portable as you are careful.
I envisage writing a couple of sample modules; for example one allowing you to list available sites in Apache, disable the live ones, enable/disable mod_rewrite, etc.
These modules will be decoupled from the policies, and will thus be shareable.
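For example a (purely hypothetical) Apache helper module might start out like this - the package name and the helper are invented for illustration, not part of the real distribution:

package Slaughter::Example::Apache2;

use strict;
use warnings;

#
# Return the names of the sites present in sites-available.
#
sub AvailableSites
{
    my @sites;

    if ( opendir( my $dh, "/etc/apache2/sites-available" ) )
    {
        @sites = grep { !/^\./ } readdir($dh);
        closedir($dh);
    }

    return (@sites);
}

1;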
Anyway, I'm always curious to learn about configuration management systems, but I think that even though I've reinvented the wheel I've done so usefully. The DSL that other systems use can be fiddly and annoying - using a real language at the core of the system seems like a good win.
There are systems layered upon SSH, such as fabric, ansible, etc, and that was almost a route I went down - but ultimately I prefer the notion of client-pull to server-push, although it is possible that in the future we'll launch a mini-daemon to allow a central host/hosts to initiate a run.
Tags: cfengine, chef, puppet, slaughter
|
7 February 2013 21:50
Tonight I've made a new release of my slaughter automation tool.
Recent emails lead me to believe I've now got two more users, so I hope they appreciate this:
That covers installation, setup, usage, and more. It took a while to write, but I actually enjoyed it. I'm sure further additions will be made going forward. Until then I'm going to call it a night and enjoy some delicious cake.
Tags: documentation, slaughter
|
14 May 2013 21:50
Today my main machine was down for about 8 hours. Oops.
That meant when I got home, after a long and dull train journey, I received a bunch of mails from various hosts each saying:
- Failed to fetch slaughter policies from rsync://www.steve.org.uk/slaughter
Slaughter is my sysadmin utility which pulls policies/recipes from a central location and applies them to the local host.
Slaughter has a bunch of different transports, which are the means by which policies and files are transferred from the remote "central host" to the local machine. Since git is supported I've now switched my policies to be fetched from the master github repository.
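The switch itself is nothing more than a configuration change along these lines (the repository name here is made up):

transport = git
prefix    = git://github.com/example/slaughter-policies.git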
This means:
- All my servers need git installed. Which was already the case.
- I can run one less service on my main box.
- We now have a contest: Is my box more reliable than github?
In other news I've fettled with lumail a bit this week, but I'm basically doing nothing until I've pondered my way out of the hole I've dug myself into.
Like mutt lumail has the notion of "limiting" the display of things:
- Show all maildirs.
- Show all maildirs with new mail in them.
- Show all maildirs that match a pattern.
- Show all messages in the currently selected folder(s)
- More than one folder may be selected :)
- Show all unread messages in the currently selected folder(s).
Unfortunately the latter has caused an annoying, and anticipated, failure case. If you open a folder and cause it to only show unread messages all looks good - until you read a message. At that point it is no longer allowed to be displayed, so it disappears. Since you were reading a message the next one is opened instead, which then becomes marked as read and should no longer be displayed, because we've said "show me new/unread-only messages please".
The net result is that if you show only unread messages and make the mistake of reading one .. you quickly cycle through reading all of them, and are left with an empty display, as each message in turn is opened, read, and marked as non-new.
There are solutions, one of which I documented on the issue. But this has a bad side-effect that message navigation is suddenly complicated in ways that are annoying.
For the moment I'm mulling the problem over and I will only make trivial cleanup changes until I've got my head back in the game and a good solution that won't cause me more pain.
Tags: github, lumail, slaughter, sysadmin
|
|