Entries tagged automation
16 January 2010 21:50
I've talked before about the minimal way in which I've been using a lot of the available automation tools. I tend to use them to carry out only a few operations:
- Fetch a file from a remote source.
  - If this has changed, run some action.
- Ensure a package is installed.
  - If this is carried out, run some action.
- Run a command on some simple criterion.
  - E.g. every day at 11pm, run a mirror.
In the pub I've had more than a few chats about how to parse a mini-language and carry these operations out, and what facilities other people use. It'd be almost trivial to come up with a mini-language, but the conclusion has always been that such mini-languages aren't expressive enough to give you the arbitrary flexibility some people would desire. (Nested conditionals and the ability to do things on a per-host, per-day, per-arch basis for example.)
It struck me last night that you could instead cheat. Why not run scripting languages directly on your client nodes? Suppose you could write your automation in Ruby or Perl, and all you'd need to do is define a few additional primitives.
For example:
#
# /policies/default.policy - the file that all client nodes poll.
#
#
# Fetch the per-node policy if it exists.
#
FetchPolicy $hostname.policy ;
#
# Ensure SSH is OK
#
FetchPolicy ssh-server.policy ;
#
# Or explicitly specify the URL:
#
# FetchPolicy http://example.com/policies/ssh-server.policy ;
#
# Finally a quick fetch of a remote file.
#
if ( FetchFile( Source => "/etc/motd",
                Dest   => "/etc/motd",
                Owner  => "root",
                Group  => "root",
                Mode   => "0644" ) )
{
    RunCommand( "id" );
}
This default policy attempts to include some other policies, which are essentially Perl files that have some additional "admin-esque" primitives available, such as "InstallPackage", "PurgePackage", and "FetchFile".
FetchFile is the only one I've fully implemented so far. Given a server it will fetch http://server/prefix/files/$FILENAME into a local file, and will set up the owner/group/mode. If the fetch succeeds, and the contents differ from the current contents of the named file (or the file doesn't yet exist), the new version will be moved into place and the function will return true.
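For the sake of illustration, here's a minimal sketch of how such a FetchFile primitive might be implemented. The $SERVER URL, the temporary-file naming, and the (lack of) error-handling are all assumptions for the sketch, not the real code:
use strict;
use warnings;

use File::Compare qw( compare );
use File::Copy    qw( move );
use LWP::Simple   qw( getstore is_success );

# Hypothetical root of the central server's tree.
my $SERVER = "http://example.com/prefix";

sub FetchFile
{
    my (%args) = @_;

    # Fetch the remote file to a temporary location.
    # (The real implementation also tries per-host variants,
    #  such as motd.$hostname, before the global file.)
    my $tmp  = $args{'Dest'} . ".tmp.$$";
    my $code = getstore( $SERVER . "/files" . $args{'Source'}, $tmp );
    return 0 unless ( is_success( $code ) );

    # Move into place if the destination is missing, or differs.
    if ( ( !-e $args{'Dest'} ) ||
         ( compare( $tmp, $args{'Dest'} ) != 0 ) )
    {
        move( $tmp, $args{'Dest'} );
        chmod( oct( $args{'Mode'} ), $args{'Dest'} );
        chown( scalar getpwnam( $args{'Owner'} ),
               scalar getgrnam( $args{'Group'} ),
               $args{'Dest'} );
        return 1;
    }

    # No change; tidy up.
    unlink( $tmp );
    return 0;
}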
On the server side I just have a layout that makes sense:
.
|-- files
| `-- etc
| |-- motd
| |-- motd.silver.my.flat
| `-- motd.gold
`-- policies
|-- default.policy
|-- ssh-server.policy
`-- steve.policy
Here, for a host named gold.my.flat, FetchFile has been implemented to first request /files/etc/motd.gold.my.flat, then /files/etc/motd.gold, and finally the global file /files/etc/motd.
In short you don't want to be forced to write Perl which would run things like this:
# install ssh
if ( -e "/etc/apt/sources.list" )
{
    # we're probably debian
    system( "apt-get update" );
    system( "apt-get install openssh-server" );
}
You just want to be able to say "Install Package foo", and rely upon the helper library's primitives being implemented correctly enough to make that work.
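As a sketch, such an "InstallPackage" primitive might look like the following - the package-manager detection here is my assumption for illustration, not the actual implementation:
sub InstallPackage
{
    my (%args) = @_;

    # Pick a package-manager appropriate to this host.
    # (Assumed detection logic, for illustration only.)
    if ( -x "/usr/bin/apt-get" )
    {
        system( "apt-get", "-y", "install", $args{'Package'} );
    }
    elsif ( -x "/usr/bin/yum" )
    {
        system( "yum", "-y", "install", $args{'Package'} );
    }
    else
    {
        die "Don't know how to install packages on this host";
    }
}

# Now a policy can just say:
InstallPackage( Package => "openssh-server" );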
I'll probably stop there, but it has given me a fair amount to think about. Not least of which: what is the minimal set of primitives required to usefully automate client nodes?
ObFilm: Moulin Rouge!
Tags: automation, cfengine, puppet
17 January 2010 21:50
So I previously mentioned I'd knocked up a simple automation tool for deploying policies (read: "scripts") from a central location to a number of distinct machines.
There seemed to be a small amount of interest, so I've written it all up:
- slaughter - Perl System Administration & Automation tool
Why slaughter? I have no idea. Yesterday evening it made sense, somehow, on the basis that it rhymed with "auto" (as in automation). This morning it made less sense. But meh.
The list of primitives has grown a little, and the brief examples probably provide a bit of flavour.
In short you:
- Install the package upon a client you wish to manage.
- When "slaughter" is invoked it will fetch http://example.com/slaughter/default.policy
  - This file may include other policy files via "IncludePolicy" statements.
- Once all the named policies have been downloaded/expanded they'll be written to a local file.
- The local file will have Perl-fu wrapped around it, such that the Slaughter::linux module is available.
  - This is where the definitions for "FetchFile", "Mounts", etc, are located.
- The local file will be executed, then removed. (A rough sketch of that generated file follows this list.)
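To be clear, this is just a guess at the shape of that wrapper; the real output of slaughter may well differ:
#!/usr/bin/perl
use strict;
use warnings;

# Make the primitives (FetchFile, InstallPackage, Mounts, ...) available.
use Slaughter::linux;

#
#  The contents of default.policy - with any IncludePolicy statements
# recursively expanded - are pasted here verbatim, then this file is
# executed, and finally removed.
#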
All in all it's probably more complex than it needs to be, but I've managed to do interesting things with these new built-in primitives, and none of it is massively Debian-specific, or even Linux-specific.
ObSubject: Jaws
Tags: automation, slaughter
30 December 2015 21:50
In my old flat I had a couple of simple radio-controlled switches,
which allowed me to toggle power to a pair of standing lamps - one at
each side of the bed. This was very lazy, but also really handy, and
I've always been curious about automation.
When it comes to automation there seem to be three main flavours:
- X10
The original standard, with stuff produced by many vendors and
good Linux support.
X10 supports two ways of sending/receiving commands - over
the electrical wiring, and over RF.
- Z-Wave
This is the newcomer which, despite that, seems to be well-supported
and extensible. It allows "measurements" to be sent/received in
addition to the broadcast of events like "switch on" and "switch off".
- Other systems - often lighting-centric
There are toy-things like the previously-noted power-controlling
switches, and there are also stand-alone devices from companies like
Philips, with their Philips Hue system. But given how Philips recently
crippled their devices to disable third-party bulbs I've no desire to
use them.
One company caught my eye though: Osram make a smart lightbulb, and a
mini-hub to work with it.
So I bought one of the Osram Lightify systems, consisting of a magic box
and a pair of lightbulbs. The box connects to your wifi, and gets an
IP address. The IP address is then used by the application on your
mobile phone (i.e. the magic box does the magic, not the bulbs). The
phone application can be used to trigger "on", "off", "dim", "brighter", and the
various colour-changing commands, as you would expect.
You absolutely must use the phone-based application to do the initial
setup, but after that the whole point was that I could automate things:
I wanted to be able to have my desktop computer schedule events, so I
started hacking.
I've written a simple Perl module to let me discover bulbs, and
turn them off and on. No doubt it'll be on CPAN in the near future,
once I can pick a suitable name for it:
$ ol --bridge=192.168.10.136 --list
hall MAC:8418260000d9c70c RGBW:255,255,255,255 STATE:On
kitchen MAC:8418260000cb433b RGBW:255,255,255,255 STATE:On
$ ol --bridge=192.168.10.136 --off=kitchen
$ ol --bridge=192.168.10.136 --list
hall MAC:8418260000d9c70c RGBW:255,255,255,255 STATE:On
kitchen MAC:8418260000cb433b RGBW:255,255,255,255 STATE:Off
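If the module does make it to CPAN I'd expect its interface to look something like the following - to be clear, the module and method names below are invented for illustration; only the command-line tool shown above exists today:
use strict;
use warnings;

# Hypothetical module/method names, for illustration only.
my $hub = Lightify::Hub->new( host => "192.168.10.136" );

# Enumerate the paired bulbs, and turn the kitchen light off.
foreach my $bulb ( $hub->lights() )
{
    printf( "%s is %s\n", $bulb->name(), $bulb->state() );
    $bulb->off() if ( $bulb->name() eq "kitchen" );
}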
The only niggle was the fiddly pairing, and the lack of any decent
documentation. The code I wrote was loosely based on the Python
project python-lightify, written by Mikael Magnusson. It's also worth
noting that the bridge/magic-box only exposes a single port, so you can
find the device on your VLAN by nmapping for port 4000:
$ nmap -v 192.168.10.0/24 -p 4000
The device doesn't seem to allow any network setup at all - it only
uses DHCP. So you might want to make sure it gets assigned a stable
IP.
Anyway I'm going to bed. When I do so I'll turn the lights off
with my mobile phone. Neat.
In the future I will look at more complex automation, and I think
Z-Wave is the way I'll go. Right now I'm in a rented flat, so
replacing wall-switches, etc, is something I can't do. But the
systems I've looked at seem neat, and this current setup will keep me
amused for several months!
Tags: automation, lighting, osram lightify
12 October 2017 21:50
It feels like the past week or two has been very busy, and so I'm looking forward to my "holiday" next month.
I'm not really having a holiday of course; my wife is slowly returning to work, so I'll be taking a month of paternity leave, with
sole care of Oiva for the month of November. He's still a little angel, and now that he's reached 10 months old he's starting to
get much more mobile - he's on the verge of walking, but not quite there yet. Mostly that means he wants you to hold his hands so
that he can stand up, swaying back and forth before the inevitable collapse.
Beyond spending most of my evenings taking care of him, from the moment I return from work to his bedtime (around 7:30PM), I've
made the Debian Administration website both read-only and much simpler. In the past that
site was powered by a lot of servers, I think around 11. Now it has only a small number of machines, which should slowly decrease.
I've ripped out the database host, the redis host, the events-server, the planet-machine, the email-box, etc. Now we have a much
simpler setup:
- Front-end machine
  - Directly serves the code site.
  - Directly serves the SSL site, which exists solely for Let's Encrypt.
  - Runs HAProxy to route the rest of the requests to the cluster.
- 4 x Apache servers
  - Each one has a (read-only) MySQL database on it for the content.
    - In case of future compromise I removed all user passwords, and scrambled the email-addresses.
    - I don't think there's a huge risk, but better safe than sorry.
  - Each one runs the web-application.
    - This now caches each generated page to /tmp/x/x/x/x/$hash if it doesn't already exist.
    - If a request is cached it is served from that cache, rather than being generated dynamically. (A sketch of the caching idea follows this list.)
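The caching approach is simple enough to sketch. The SHA1 hashing, and the way the hash is sharded into subdirectories, are my guesses based on the path above, not the real code:
use strict;
use warnings;

use Digest::SHA qw( sha1_hex );
use File::Path  qw( make_path );

# Serve a page from the cache, generating & storing it on a miss.
sub cached_page
{
    my ( $url, $generator ) = @_;

    # Shard the hash into subdirectories: /tmp/a/b/c/d/$hash
    my $hash = sha1_hex( $url );
    my $dir  = join( "/", "/tmp", split( //, substr( $hash, 0, 4 ) ) );
    my $file = "$dir/$hash";

    # Cache-hit?  Serve the stored copy.
    if ( -e $file )
    {
        open( my $fh, "<", $file ) or die "Failed to read cache: $!";
        local $/ = undef;
        my $page = <$fh>;
        close( $fh );
        return $page;
    }

    # Cache-miss: generate, store, and serve.
    my $page = $generator->( $url );
    make_path( $dir );
    open( my $fh, ">", $file ) or die "Failed to write cache: $!";
    print $fh $page;
    close( $fh );
    return $page;
}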
Finally, although I'm slowly making progress with "radio stuff", I've knocked up a simple hack which uses an ultrasonic sensor
to determine whether I'm sat in front of my (home) PC. If I am, everything is good. If I'm absent, the music is stopped and
the screen locked. Kinda neat.
(Simple ESP8266 device wired to the sensor. When the state changes a message is posted to Mosquitto, where a listener reacts
to the change(s).)
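The listener is trivial; here's a sketch using the Net::MQTT::Simple module. The broker address, the topic-name, and the commands run in response are assumptions, not my actual setup:
use strict;
use warnings;

use Net::MQTT::Simple;

# Connect to the (assumed) Mosquitto broker on the LAN.
my $mqtt = Net::MQTT::Simple->new( "192.168.10.10" );

# React whenever the ESP8266 reports a state-change.
$mqtt->subscribe(
    "presence/desk" => sub {
        my ( $topic, $message ) = @_;

        if ( $message eq "absent" )
        {
            system( "mpc", "pause" );                  # stop the music
            system( "xscreensaver-command", "-lock" ); # lock the screen
        }
    } );

$mqtt->run();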
Oh, that wasn't final after all: I've also transferred my mobile phone from DNA.fi to MoiMobile. The transfer should complete soon; right now my phone is in limbo, active on neither service. Oops.
Tags: automation, esp8266
26 February 2019 12:01
Recently I heard that Travis-CI had been bought out, and later that they'd started to fire their staff.
I've used Travis-CI for a few years now, via GitHub, to automatically build binaries for releases, and to run tests.
Since I was recently invited to try the GitHub Actions beta I figured it was time to experiment.
GitHub Actions allow you to trigger "stuff" on "events". Events are things like commits being pushed to your repository, new releases appearing, and so on. "Stuff" basically means "launch a specific docker container".
The specified docker container has a copy of your project repository cloned into it, and you can operate upon that repository pretty freely.
I created two actions (which basically means I authored two Dockerfiles), and set up the meta-information, so that I can now do everything I used to do with Travis easily:
- github-action-tester
  - Allows tests to be run whenever a new commit is pushed to your repository, or whenever a pull-request is submitted or updated.
- github-action-publish-binaries
  - If you create a new release in the GitHub UI your project is built, and the specified binaries are attached to the release.
Configuring these in the repository is very simple: you have to define a workflow in the file .github/main.workflow, and my projects tend to look very similar:
# pushes trigger the testsuite
workflow "Push Event" {
    on = "push"
    resolves = ["Test"]
}

# pull-requests trigger the testsuite
workflow "Pull Request" {
    on = "pull_request"
    resolves = ["Test"]
}

# releases trigger new binary artifacts
workflow "Handle Release" {
    on = "release"
    resolves = ["Upload"]
}

##
## The actions
##

##
## Run the test-cases, via .github/run-tests.sh
##
action "Test" {
    uses = "skx/github-action-tester@master"
}

##
## Build the binaries, via .github/build, then upload them.
##
action "Upload" {
    uses = "skx/github-action-publish-binaries@master"
    args = "math-compiler-*"
    secrets = ["GITHUB_TOKEN"]
}
In order to make the actions generic they both execute a shell-script inside your repository. For example the action to run the tests just executes .github/run-tests.sh.
That way you can write whatever tests make sense for your project. For example a golang application would probably run go test ..., but a C-based system might run make test.
Similarly the release-making action runs .github/build , and assumes that will produce your binaries, which are then uploaded.
The upload-action requires the use of a secret, but it seems to be handled by
magic - I didn't create one. I suspect GITHUB_TOKEN is a magic-secret which
is generated on-demand.
Anyway I updated a few projects, and you can see their configuration by looking at the .github directory within each repository.
All in all it was worth the few hours I spent on it, and now I no longer use Travis-CI. The cost? I guess now I'm tied to GitHub some more...
Tags: automation, github, golang, travis