Entries posted in August 2014
5 August 2014 21:50
In the past I used to pay for an email->SMS gateway, which was used to alert me about some urgent things. That was nice because it was bi-directional, and at one point I could restart particular services via sending SMS messages.
These days I get it for free, and for my own reference here is how you get to receive free SMS alerts via Orange, which is my mobile phone company. If you don't use Orange/EE this will probably not help you.
The first step is to register an Orange email-account, which can be done via the Orange website.
Once you've done that you'll have an email address of the form [email protected], which is kinda-sorta linked to your mobile number. You'll sign in and be shown something that looks like webmail from the early 90s.
The thing that makes this interesting is that you can look in the left-hand menu and see a link called "SMS Alerts". Visit it. That will let you do things like set the number of SMSs you wish to receive a month (I chose "1000"), and the hours during which delivery will be made (I chose "All the time").
Anyway if you go through this dance you'll end up with an email address [email protected], and when an email arrives at that destination an SMS will be sent to your phone.
The content of the SMS will be the subject of the mail, truncated if necessary, so you can send a hello message to yourself like this:
echo "nop" | mail -s "Hello, urgent message is present" [email protected]
Delivery seems pretty reliable, and I've scheduled the mailbox to be purged every week, to avoid it getting full:
Hostname: pop.orange.net
Username: Your mobile number
Password: Your password
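If you wanted to script that weekly purge yourself, a minimal sketch using Python's standard-library poplib might look like this. The host comes from the table above; the username and password shown are placeholders you'd replace with your own:

```python
import poplib

# Placeholder credentials -- substitute your own mobile number and password.
HOST, USER, PASSWORD = "pop.orange.net", "07700900000", "secret"

def purge_mailbox(conn):
    """Delete every message via any POP3-like connection object."""
    count, _size = conn.stat()          # (message count, mailbox size)
    for msgnum in range(1, count + 1):  # POP3 numbers messages from 1
        conn.dele(msgnum)
    conn.quit()                         # QUIT commits the deletions
    return count

# Real usage (commented out so the sketch has no side effects):
#   conn = poplib.POP3(HOST)
#   conn.user(USER)
#   conn.pass_(PASSWORD)
#   purge_mailbox(conn)
```

Run from cron once a week and the mailbox never fills up.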
If you wished to send mail from this you can use smtp.orange.net, but I pity the fool who used their mobile phone company for their primary email address.
Tags: orange, random, sms
|
9 August 2014 21:50
I run a cluster for the Debian Administration website, and the code is starting to show its age: it is not so modern, and has evolved a lot of baggage over the years.
Given the relatively clean separation between the logical components I'm interested in trying something new. In brief the current codebase allows:
- Posting of articles, blog-entries, and polls.
- The manipulation of the same.
- User-account management.
It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.
If we have a JSON endpoint that will allow:
- GET /article/32
- POST /article/ [create]
- GET /articles/offset/number [get the most recent]
Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
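The shim idea can be sketched in a few lines of Python (the backend URL, port, and JSON field names here are invented for illustration; the real servers aren't shown in the post):

```python
import json

# Hypothetical location of the article-server; the real host/port are assumptions.
ARTICLE_BACKEND = "http://127.0.0.1:8001"

def get_article(article_id, fetch):
    """Shim: ask the article-server for /article/<id> and decode the JSON body.

    `fetch` is any callable mapping a URL to a response body string, e.g.
    lambda url: urllib.request.urlopen(url).read().decode()."""
    body = fetch("%s/article/%d" % (ARTICLE_BACKEND, article_id))
    return json.loads(body)

def render_article(article):
    """The thin presentation layer on top of the backend data."""
    return "<h1>%s</h1>\n<div>%s</div>" % (article["title"], article["body"])
```

Because the fetcher is injected, the presentation layer never cares whether the backend is a Perl CGI, a JSON micro-server, or a test stub.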
At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.
There is a tiny presentation server written to use those back-end servers and it seems like an approach that might work. Of course deployment might be a pain..
It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.
Tags: snooze, yawns
|
15 August 2014 21:50
This is a random post inspired by recent purchases. Some things we buy are practical, others are a little arbitrary.
I tend to avoid buying things for the sake of it, and have explicitly started decluttering our house over the past few years. That said sometimes things just seem sufficiently "cool" that they get bought without too much thought.
This entry is about two things.
A couple of years ago my bathroom was ripped apart and refitted. Gone was the old and nasty room, and in its place was a glorious space. There was only one downside to the new bathroom - you turn on the light and the fan comes on too.
When your wife works funny shifts at the hospital you can find that the (quiet) fan sounds very loud in the middle of the night and wakes you up..
So I figured we could buy a couple of LED lights and scatter them around the place - when it is dark the movement sensors turn on the lights.
These things are amazing. We have one sat on a shelf, one velcroed to the bottom of the sink, and one on the floor, just hidden underneath the toilet.
Due to the shiny-white walls of the room they're all you need in the dark.
By contrast my second purchase was a mistake - the Logitech Harmony 650 Universal Remote Control should be great. It clearly has the features I want, being able to power:
- Our TV.
- Our Sky-box.
- Our DVD player.
The problem is solely due to the horrific software. You program the device via an application/website which works only under Windows.
I had to resort to installing Windows in a virtual machine to make it run:
# Get the bus and device ID for the USB device, stripping leading zeros
bus=$(lsusb | grep -i Harmony | awk '{print $2}' | sed 's/^0*//')
id=$(lsusb | grep -i Harmony | awk '{print $4}' | sed 's/:$//; s/^0*//')
# Pass the device through to KVM
kvm -localtime .. -usb -device usb-host,hostbus=$bus,hostaddr=$id ..
That allows the device to be passed through to windows, though you'll later have to jump onto the Qemu console to re-add the device as the software disconnects and reconnects it at random times, and the bus changes. Sigh.
I guess I can pretend it works, and it has cut down on the number of remotes sat on our table, but the overwhelmingly negative setup and configuration process has really soured me on it.
There is a Linux application which will take a configuration file and squirt it onto the device when attached via a USB cable. This software, which I found while researching before buying, is useful but not as much as I'd expected: it lets you upload the config file, but to get a config file you must first fully complete the setup on Windows. It is impossible to configure/use this device solely using GNU/Linux.
(Apparently there is MacOS software too, I don't use macs. *shrugs*)
In conclusion - Motion-activated LED lights, more useful than expected, but Harmony causes Discord.
Tags: random
|
21 August 2014 21:50
Recently I've been getting annoyed with the Debian Administration website; too often it would be slower than it should be considering the resources behind it.
As a brief recap I have six nodes:
- 1 x MySQL Database - The only MySQL database I personally manage these days.
- 4 x Web Nodes.
- 1 x Misc server.
The misc server is designed to display events. There is a node.js listener which receives UDP messages and stores them in a rotating buffer. The messages might contain things like "User bob logged in", "Slaughter ran", etc. It's a neat hack which gives a good feeling of what is going on cluster-wide.
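The rotating-buffer idea can be sketched as follows. This is a Python illustration (the real listener is node.js), and the port number and buffer size are invented for the example:

```python
import socket
from collections import deque

# Keep only the most recent N events; older ones silently fall off the end.
BUFFER_SIZE = 100
events = deque(maxlen=BUFFER_SIZE)

def record(message):
    """Store one event line, e.g. 'User bob logged in', in the rotating buffer."""
    events.append(message)

def listen(host="0.0.0.0", port=7777):
    """Receive UDP datagrams from the cluster nodes forever."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(1024)
        record(data.decode("utf-8", "replace"))
```

A deque with `maxlen` set gives you the rotation for free: once full, each append discards the oldest entry.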
I need to rationalize that code - but there's a very simple predecessor posted on github for the curious.
Anyway enough diversions, the database is tuned, and "small". The misc server is almost entirely irrelevant, non-public, and not explicitly advertised.
So what do the web nodes run? Well they run a lot. Potentially.
Each web node has four services configured:
- Apache 2.x - All nodes.
- uCarp - All nodes.
- Pound - Master node.
- Varnish - Master node.
Apache runs the main site, listening on *:8080.
One of the nodes will be special and will claim a virtual IP provided via ucarp. The virtual IP is actually the end-point visitors hit, meaning we have:
Master host running: Apache, ucarp, Pound, Varnish
Other hosts running: Apache, ucarp
Pound is configured to listen on the virtual IP and perform SSL termination. That means that incoming requests get proxied from "vip:443 -> vip:80". Varnish listens on "vip:80" and proxies to the back-end apache instances.
The end result should be high availability. In the typical case all four servers are alive, and all is well.
If one server dies, and it is not the master, then it will simply be dropped as a valid back-end. If a single server dies and it is the master then a new one will appear, thanks to the magic of ucarp, and the remaining three will be used as expected.
I'm sure there is a pathological case when all four hosts die, and at that point the site will be down, but that's something that should be atypical.
Yes, I am prone to over-engineering. The site doesn't have any availability requirements that justify this setup, but it is good to experiment and learn things.
So, with this setup in mind, with incoming requests (on average) being divided at random onto one of four hosts, why is the damn thing so slow?
We'll come back to that in the next post.
(Good news though; I fixed it ;)
Tags: brightbox, debian-administration, yawns
|
23 August 2014 21:50
So I previously talked about the setup behind Debian Administration, and my complaints about the slowness.
The previous post talked about the logical setup and the hardware. This post talks about the more interesting part: the code.
The code behind the site was originally written by Denny De La Haye. I found it and reworked it a lot, most obviously adding structure and test cases.
Once I did that the early version of the site was born.
Later my version became the official version; when Denny set up Police State UK he used my codebase rather than his own.
So the code huh? Well as you might expect it is written in Perl. There used to be this layout:
yawns/cgi-bin/index.cgi
yawns/cgi-bin/Pages.pl
yawns/lib/...
yawns/htdocs/
Almost every request would hit the index.cgi script, which would parse the request and return the appropriate output via the standard CGI interface.
How did it know what you wanted? Well sometimes there would be a parameter set which would be looked up in a dispatch-table:
/cgi-bin/index.cgi?article=40 - Show article 40
/cgi-bin/index.cgi?view_user=Steve - Show the user Steve
/cgi-bin/index.cgi?recent_comments=10 - Show the most recent comments.
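The dispatch-table idea can be sketched like this (a Python illustration; the real code is Perl, and the handler names here are invented):

```python
# Each recognised CGI parameter maps to a handler routine.
def show_article(value):
    return "article %s" % value

def view_user(value):
    return "user %s" % value

def recent_comments(value):
    return "%s most recent comments" % value

DISPATCH = {
    "article": show_article,
    "view_user": view_user,
    "recent_comments": recent_comments,
}

def handle_request(params):
    """Find the first recognised parameter and run its handler;
    with no recognised parameter, fall back to the front page."""
    for name, handler in DISPATCH.items():
        if name in params:
            return handler(params[name])
    return "front page"
```

The weakness is exactly the one described below: every handler lives behind one entry point, so every request pays the full start-up cost, and nothing forces the handlers to stay consistent with each other.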
Over time the code became hard to update because there was no consistency, and over time the site became slow because this is not a quick setup. Spiders, bots, and just average users would cause a lot of perl processes to run.
So? What did I do? I moved the thing to using FastCGI, which avoids the cost of forking Perl and loading the (100k+) codebase on every request.
Unfortunately this required a bit of work because all the parameter handling was messy and caused issues if I just renamed index.cgi -> index.fcgi. The most obvious solution was to use one parameter, globally, to specify the requested mode of operation.
Hang on? One parameter to control the page requested? A persistent environment? What does that remind me of? Yes. CGI::Application.
I started small, and pulled some of the code out of index.cgi + Pages.pl, and over into a dedicated CGI::Application class:
- Application::Feeds - Called via /cgi-bin/f.fcgi.
- Application::Ajax - Called via /cgi-bin/a.fcgi.
So now every part of the site that is called by Ajax has one persistent handler, and every part of the site which returns RSS feeds has another.
I had some fun setting up the sessions to match those created by the old code, but I quickly made it work.
The final job was the biggest, moving all the other (non-feed, non-ajax) modes over to a similar CGI::Application structure. There were 53 modes that had to be ported, and I did them methodically, first porting all the Poll-related requests, then all the article-related ones, and so on. I think I did about 15 a day for three days. Then the rest in a sudden rush.
In conclusion the code is now fast because we don't use CGI, and instead use FastCGI.
This allowed minor changes to be carried out, such as compiling the HTML::Template templates which determine the look and feel, etc. Those things don't make sense in the CGI environment, but with persistence they are essentially free.
The site got a little more of a speed boost when I updated DNS, and a lot more when I blacklisted a bunch of IP-space.
As I was wrapping this up I realized that the code had accidentally become closed - because the old repository no longer exists. That is not deliberate, or intentional, and will be rectified soon.
The site would never have been started if I'd not seen Denny's original project, and although I don't think others would use the code it should be possible. I remember at the time I was searching for things like "Perl CMS" and finding Slashcode, and Scoop, which I knew were too heavyweight for my little toy blog.
In conclusion the Debian Administration website is 10 years old now. It might not have changed the world, it might have become less relevant, but I'm glad I tried, and I'm glad there were years when it really was the best place to be.
These days there are HowtoForges, blogs, spam posts titled "How to install SSH on Trusty", "How to install SSH on Wheezy", "How to install SSH on Precise", and all that. No shortage of content, just finding the good from the bad is the challenge.
Me? The single best resource I read these days is probably LWN.net.
Starting to ramble now.
Go look at my quick hack for remote command execution https://github.com/skx/nanoexec ?
Tags: debian-administration, yawns
|
25 August 2014 21:50
To round up the discussion of the Debian Administration site yesterday I flipped the switch on the load-balancing. Rather than this:
https --> pound --\
                   \
http ---------------> varnish --> apache
We now have the simpler route for all requests:
http -> haproxy -> apache
https -> haproxy -> apache
This means we have one less HTTP-request for all incoming secure connections, and these days secure connections are preferred since a Strict-Transport-Security header is set.
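A minimal haproxy sketch of that routing might look like the following; the backend IPs, certificate path, and max-age value are invented for illustration, and the real configuration isn't shown in the post:

```
frontend www
    bind :80
    bind :443 ssl crt /etc/haproxy/site.pem
    default_backend apache

backend apache
    # Tell browsers to prefer HTTPS on future visits.
    rspadd Strict-Transport-Security:\ max-age=31536000
    server web1 10.0.0.1:8080 check
    server web2 10.0.0.2:8080 check
    server web3 10.0.0.3:8080 check
    server web4 10.0.0.4:8080 check
```

With haproxy terminating SSL itself there is no longer any need for the separate pound and varnish hops.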
In other news I've been juggling git repositories; I've setup an installation of GitBucket on my git-host. My personal git repository used to contain some private repositories and some mirrors.
Now it contains mirrors of most things on github, as well as many more private repositories.
The main reason for the switch was to get a prettier interface and bug-tracker support.
A side-benefit is that I can use "groups" to organize repositories.
Most of those are mirrors of the github repositories, but some are new. When signed in I see more sources, for example the source to http://steve.org.uk.
I've been pleased with the setup and performance, though I had to add some caching and some other magic at the nginx level to provide /robots.txt, etc, which are not otherwise present.
I'm not abandoning github, but I will no longer be using it for private repositories (I was gifted a free subscription a year or three ago), and nor will I post things there exclusively.
If a single canonical source location is required for a repository it will be one that I control, maintain, and host.
I don't expect I'll give people commit access on this mirror, but it is certainly possible. In the past I've certainly given people access to private repositories for collaboration, etc.
Tags: git, github, yawns
|
29 August 2014 21:50
Yesterday I carried out the upgrade of a Debian host from Squeeze to Wheezy for a friend. I like doing odd-jobs like this as they're generally painless, and when there are problems it is a fun learning experience.
I accidentally forgot to check on the status of the MySQL server on that particular host, which was a little embarrassing, but later put together a reasonably thorough serverspec recipe to describe how the machine should be setup, which will avoid that problem in the future - Introduction/tutorial here.
The more I use serverspec the more I like it. My own personal servers have good rules now:
shelob ~/Repos/git.steve.org.uk/server/testing $ make
..
Finished in 1 minute 6.53 seconds
362 examples, 0 failures
Slow, but comprehensive.
In other news I've now migrated every single one of my personal mercurial repositories over to git. I didn't have a particular reason for doing that, but I've started using git more and more for collaboration with others and using two systems felt like an annoyance.
That means I no longer have to host two different kinds of repositories, and I can use the excellent gitbucket software on my git repository host.
Needless to say I wrote a policy for this host too:
#
#  The host should be wheezy.
#
describe command("lsb_release -d") do
  its(:stdout) { should match /wheezy/ }
end

#
#  Our gitbucket instance should be running, under runit.
#
describe supervise('gitbucket') do
  its(:status) { should eq 'run' }
end

#
#  nginx will proxy to our back-end.
#
describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end

#
#  Host should resolve.
#
describe host("git.steve.org.uk") do
  it { should be_resolvable.by('dns') }
end
Simple stuff, but being able to trigger all these kind of tests, on all my hosts, with one command, is very reassuring.
Tags: git, github, serverspec
|
31 August 2014 21:50
Today we have a little diversion to talk about the National Health Service. The NHS is the publicly funded healthcare system in the UK.
Actually there are four such services in the UK, only one of which has this name:
- The national health service (England)
- Health and Social Care in Northern Ireland.
- NHS Scotland.
- NHS Wales.
In theory this doesn't matter, if you're in the UK and you break your leg you get carried to a hospital and you get treated. There are differences in policies because different rules apply, but the basic stuff "free health care" applies to all locations.
(Differences? In Scotland you get eye-tests for free, in England you pay.)
My wife works as an accident & emergency doctor, and has recently changed jobs. Hearing her talk about her work is fascinating.
The hospitals she's worked in (Dundee, Perth, Kirkcaldy, Edinburgh, Livingstone) are interesting places. During the week things are usually reasonably quiet, and during the weekend things get significantly more busy. (This might mean there are 20 doctors to hand, versus three at quieter times.)
Weekends are busy largely because people fall down hills, get drunk and fight, and are at home rather than at work - where 90% of accidents occur.
Of course even a "quiet" week can be busy, because folk will have heart-attacks round the clock, and somebody somewhere will always be playing with a power tool, a ladder, or both!
So what was the point of this post? Well she's recently transferred to working for a children's hospital (still in A&E) and the patients are so very different.
I expected the injuries/patients she'd see to differ. Few 10 year olds will arrive drunk (though it does happen), and few adults fall out of trees, or eat washing machine detergent, but when she returns home and talks about her day it is fascinating how many things are completely different from what I expected.
Adults come to hospital mostly because they're sick, injured, or drunk.
Children come to hospital mostly because their parents are paranoid.
A child has a rash? Doctors are closed? Let's go to the emergency ward!
A child has fallen out of a tree and has a bruise, a lump, or complains of pain? Doctors are closed? Let's go to the emergency ward!
I've not kept statistics, though I wish I could, but it seems that she can go 3-5 days between seeing an actually injured or chronically sick child. It's the first-time parents who bring kids in when they don't need to.
Understandable, completely understandable, but at the same time I'm sure it is more than a little frustrating for all involved.
Finally one thing I've learned, which seems completely stupid, is the NHS-Scotland approach to recruitment. You apply for a role, such as "A&E doctor" and after an interview, etc, you get told "You've been accepted - you will now work in Glasgow".
In short you apply for a post, and then get told where it will be based afterward. There's no ability to say "I'd like to be a Doctor in city X - where I live", you apply, and get told where it is post-acceptance. If it is 100+ miles away you either choose to commute, or decline and go through the process again.
This has led to Kirsi working in hospitals within a radius of about 100km of the city we live in, and has meant she's had to turn down several posts.
And that is all I have to say about the NHS for the moment, except for the implicit pity for people who have to pay (inflated and life-changing) prices for things in other countries.
Tags: children, doctors, nhs, random
|