
 

Entries posted in September 2014

systemd, a brave new world

4 September 2014 21:50

After spending a while fighting with upstart at work, I decided that systemd couldn't be any worse, and yesterday morning I upgraded one of my servers to run it.

I have two classes of servers:

  • Those that run standard daemons, with nothing special.
  • Those that run different services under runit
    • For example docker guests, node.js applications, and similar.

I thought it would be a fair test to upgrade one of each type of system, to see how it worked.

The Debian wiki has instructions for installing systemd, and both systems came up just fine.
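
From memory the process boils down to a couple of commands, and the wiki sensibly suggests a trial boot with init=/lib/systemd/systemd on the kernel command-line before committing:

# Install systemd itself, then systemd-sysv to make it the default init:
apt-get install systemd
apt-get install systemd-sysv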

Although I realize I should eventually replace my current runit jobs with systemd units, I didn't want to do that yet. Instead I wrote a systemd .service file to launch runit against /etc/service, and that worked as expected.
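
The unit only needs a few lines. A minimal sketch of the kind of thing I mean, not the exact file I deployed:

[Unit]
Description=runit service supervision
After=network.target

[Service]
# runsvdir supervises everything beneath /etc/service
ExecStart=/usr/bin/runsvdir -P /etc/service
Restart=always

[Install]
WantedBy=multi-user.target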

Docker was a special case. I wrote a docker.service + docker.socket pair to launch the daemon, but when I wrote a graphite.service file to start a docker instance it kept on restarting, or failing to stop.

In short I couldn't use systemd to manage running a docker guest, but that was probably user-error. For the moment the docker-host has a shell script in root's home directory to launch the guest:

#!/bin/sh
#
# Run Graphite in a detached state.
#
/usr/bin/docker run -d -t -i -p 8080:80 -p 2003:2003 skxskx/graphite
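
In hindsight the -d flag was probably the problem: docker run -d detaches, so the process systemd started exits immediately and systemd believes the service has died. A unit has to keep the container in the foreground, along these lines (hypothetical and untested; the --name is only there so ExecStop has something to reference):

[Unit]
Description=Graphite docker guest
Requires=docker.service
After=docker.service

[Service]
# Remove any stale container from a previous run; "=-" ignores failure.
ExecStartPre=-/usr/bin/docker rm -f graphite
# No -d here: systemd needs the container to stay in the foreground.
ExecStart=/usr/bin/docker run -p 8080:80 -p 2003:2003 --name graphite skxskx/graphite
ExecStop=/usr/bin/docker stop graphite
Restart=always

[Install]
WantedBy=multi-user.target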

Without getting into politics (ha), systemd installation seemed simple, resulted in a faster boot, and didn't cause me horrific problems. Yet.

ObRandom: I'm not sure how systemd is controlling prosody, for example. If I run the status command I can see it is using the legacy init script:

root@chat ~ # systemctl status prosody.service 
prosody.service - LSB: Prosody XMPP Server
      Loaded: loaded (/etc/init.d/prosody)
      Active: active (running) since Wed, 03 Sep 2014 07:59:44 +0100; 18h ago
      CGroup: name=systemd:/system/prosody.service
          └ 942 lua5.1 /usr/bin/prosody

I've installed systemd and systemd-sysv, so I thought /etc/init.d was obsolete. In fact systemd supports SysV/LSB init scripts natively: any script in /etc/init.d which has no native unit gets a generated wrapper service, as the prosody output above shows (obviously not all packages contain /lib/systemd/system entries yet).

| 5 comments

 

If you signed my old key, please consider repeating the process

4 September 2014 21:50

I'm in the process of rejoining the Debian project. When I was previously a member I had a 1024-bit key, which is considered too small these days.

Happily I've already generated a new key, which is much bigger.

If you've signed my old key, and thus trust my identity was confirmed at some point in time, then please do consider repeating the process with the new one.

As I've signed the new key with the old one there should be no concern that it is random/spurious/malicious.

Obviously the ideal scenario is that I meet local-people to perform signing rites, in exchange for cake, beer, or other bribery.

Old key:

pub   1024D/CD4C0D9D 2002-05-29
      Key fingerprint = DB1F F3FB 1D08 FC01 ED22  2243 C0CF C6B3 CD4C 0D9D
uid                  Steve Kemp <steve@steve.org.uk>
sub   2048g/AC995563 2002-05-29

New key:

pub   4096R/0C626242 2014-03-24
      Key fingerprint = D516 C42B 1D0E 3F85 4CAB  9723 1909 D408 0C62 6242
uid                  Steve Kemp (Edinburgh, Scotland) <steve@steve.org.uk>
sub   4096R/229A4066 2014-03-24
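
For reference, the usual keyserver-based sequence is roughly the following; do verify the fingerprint against the one above (or better, against me in person) before signing:

gpg --recv-keys 0C626242
gpg --fingerprint 0C626242    # compare with the fingerprint shown above
gpg --sign-key 0C626242
gpg --send-keys 0C626242      # publish the signature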

| 3 comments

 

kvm-hosting will be ceasing, soon.

10 September 2014 21:50

Seven years ago I wanted to move on from the small virtual machine I had to a larger one. Looking at the options available it seemed the best approach was to rent a big host and divide it up into virtual machines myself.

Renting a machine with 8GB of RAM and 500GB of disk-space, then dividing that into eighths, would give each user a decent spec, and assuming I found enough users to pay for the other slots/shares it would be economically viable too.

After a few weeks I took the plunge, advertised here, and found users.

I had six users:

  • 1/8th for me.
  • 1/8th left empty/idle for the host machine.
  • 6/8ths for other users.

There were some niggles; one user seemed to suffer from connectivity problems more than the others, but on the whole the experiment worked out well.

These days, thanks to BigV, Digital Ocean, and all the newcomers, there is less need for this kind of thing, so last December I announced that the service would cease - and gave all current users a year of free service to give them time to migrate away.

The service was due to terminate in December, but triggered by some upcoming downtime, during which our host would have been moved (in the back of a van) from Manchester to York, I've taken the decision to stop it early.

It was a fun experiment: it provided me with low-cost hosting (subsidized by the other paying users), and provided some other people with nicely set-up hosting of their own.

The only outstanding question is what to do with the domain names. I could let them expire, I could try to sell them, or I could donate them to other people running hosting setups.

If anybody reading this has a use for kvm-hosting.org, kvm-hosting.net, or kvm-hosting.com, then do feel free to get in touch. No promises, obviously, but it'd be a shame for them to end up hosting adverts in a year or two's time.

| 4 comments

 

A small email utility and other updates.

11 September 2014 21:50

Last night I was looking for an image I knew a model had mailed me a few months ago, as we were talking about rescheduling a shoot at the weekend. I couldn't find it, even with my awesome mail client and filing system.

With some free time I figured I could write a little utility to dump all attachments from email folders, and find it that way.

It did cross my mind that there is a simple mail utility for dumping headers and the like, called formail, which is distributed alongside procmail, but it doesn't handle attachments.

I was tempted to write a general-purpose script to dump attachments, email header values, and the like, but given the lack of time I merely solved my own problem.

I suspect there is room for a "mail utilities" package, similar to Joey's "moreutils" and my "sysadmin utils". However I note that there is already a GNU Mailutils, which does things differently from what I'd expect - i.e. it contains a POP3 server.

Still, if you want to dump attachments from emails, have GMime installed, and want to filter by attachment name or MIME type, you might look at my trivial attachment-dump program.
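
Usage is along these lines, though the flags shown here are illustrative guesses rather than the program's documented options:

# Hypothetical invocation - consult the program's own help for real flags.
attachment-dump --mime-type image/jpeg ~/Maildir/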

Related to that I spent some time last night updating my photography site, so the animals & pets section has updated images at least.

During the course of that I found a bug in my static-site generator templer, which stopped it from automatically populating image heights/widths when invoked with a glob:

Title: Pets &amp; Animals
Images: file_glob( "*.jpg" )
---

This is the page body, it now has access to a variable called 'images'
which is a HTML::Template loop-structure containing name/height/width/etc
for each image in the current directory.

That should now be resolved, and life should once again be good.

| 2 comments

 

Storing and distributing secrets.

12 September 2014 21:50

I run a number of hosts, and they are controlled via a server automation tool I wrote called slaughter [Documentation].

The policies I use to control my hosts are public, and I don't want to make them private because they serve as good examples.

Because the policies are public I don't want to embed passwords in them, which means I need something to hold secrets securely. In my case secrets are things like plaintext passwords, and I want them to be unavailable to untrusted hosts.

The simplest solution I could think of was an IP-address-based ACL and a simple webserver. A client requests something like:

  • http://secret.example.com/user-passwords

That returns a JSON object if the requesting host is permitted to read the data. Otherwise it returns an HTTP 403 error.

The layout is very simple:

|-- secrets
|   |-- 206.190.139.148
|   |   `-- auth.json
|   |-- 127.0.0.1
|   |   `-- example.json
|   `-- 80.68.84.109
|       `-- chat.json

Each piece of data lives beneath a directory/symlink named for the IP address which is allowed to read it. If the request comes from the matching IP access is granted; if not, it is denied.

For example a failing case:

skx@desktop ~ $ curl  http://sss.steve.org.uk/chat
missing/permission denied

A working case :

root@chat ~ # curl  http://sss.steve.org.uk/chat
{ "steve": "haha", "bot": "notreally" }

(The JSON suffix is added automatically.)
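
The logic is small enough to sketch in full. This is merely the idea expressed as a CGI shell script, not the actual implementation (paths illustrative):

#!/bin/sh
# Serve secrets/<client-ip>/<name>.json if present; otherwise deny.
# REMOTE_ADDR and PATH_INFO are set by the webserver for CGI scripts.
name=$(echo "${PATH_INFO#/}" | tr -cd 'a-zA-Z0-9._-')   # sanitise the request
file="/srv/secrets/${REMOTE_ADDR}/${name}.json"

if [ -n "$name" ] && [ -f "$file" ]; then
    printf 'Content-Type: application/json\r\n\r\n'
    cat "$file"
else
    printf 'Status: 403 Forbidden\r\nContent-Type: text/plain\r\n\r\n'
    echo "missing/permission denied"
fi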

It is hardly rocket science, but I couldn't find anything else packaged neatly for this - only things like auth/secstore and factotum. So I'll share it in case it is useful.

Simple Secret Sharing, or Steve's secret storage.

| 5 comments

 

Applications updating & phoning home

16 September 2014 21:50

Personally I believe that any application packaged for Debian should neither phone home, download plugins over HTTP at run-time, nor update itself.

On that basis I've filed #761828.

As a project we have guidelines for what constitutes a "serious" bug, which generally boil down to a package containing a security issue, causing data-loss, or being unusable.

I'd like to propose that these kinds of tracking "things" are equally bad. If consensus could be reached, that would be a good thing for the freedom of our users.

(Oops, I slipped into "us" and "our users"; I'm just an outsider looking in. Mostly.)

| 4 comments

 

If this goes well I have a new blog engine

17 September 2014 21:50

Assuming this post shows up, I'll have successfully migrated from Chronicle to a temporary replacement.

Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)

Unfortunately there is a problem with chronicle: it suffers from a performance problem which has gradually become more and more vexing as the number of entries I have has grown.

When chronicle runs it:

  • reads each post into a complex data-structure.
  • walks that structure multiple times.
  • finally outputs a whole bunch of posts.

In the general case you rebuild a blog because you've made a new entry, or received a new comment. There is some code which tries to use memcached for caching, but in general chronicle just isn't fast, and it is certainly memory-bound if you have a couple of thousand entries.

Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.

So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that for writing your output pages? Instead of the complex in-RAM data-structure and the need to parse a zillion files you'd have a standard/simple SQL structure you could use to build a tag-cloud, an archive, etc. If you store the contents of each parsed entry, along with the mtime of its source file, you can update it if the entry is changed in the future, as I sometimes make typos which I only spot once I've run "make steve" on my blog sources.
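
The schema required is tiny. Something like this captures the idea (a sketch, not the actual chronicle2 schema):

sqlite3 blog.db <<'EOF'
CREATE TABLE IF NOT EXISTS posts (
    file  TEXT PRIMARY KEY,  -- source filename
    mtime INTEGER,           -- mtime when last parsed; re-parse if newer
    title TEXT,
    date  TEXT,
    tags  TEXT,
    body  TEXT               -- the parsed entry, ready for templating
);
EOF

Updating is then a question of comparing each source file's mtime against the stored value, and re-parsing only those which have changed.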

Not surprisingly the newer code is significantly faster if you have 2000+ posts. If the posts are already imported into SQLite the most recent entries are updated in 3 seconds. Starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch, the build time is still less than 10 seconds.

The downside is that I've removed features, though obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously there is only a single theme, the one used upon this site.

In conclusion: last night I wrote something which is a stepping stone between the current chronicle and the chronicle2 which will appear in due course.

PS. This entry was written in markdown, just because I wanted to be sure it worked.

| 9 comments

 

Waiting for features upstream

23 September 2014 21:50

I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.

One thing that has always bothered me is that when books are imported their ratings are imported too. If I receive a small sample of ebooks from a friend, their ratings are added to my collection.

I've always regarded ratings as things personal to me, rather than attributes of a book itself, as my tastes might not match yours, and vice-versa.

On that basis, the last time I was importing a small number of books and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it. I started hacking and put together a simple Calibre plugin to automatically zero ratings when books are imported to the collection.

Sadly this work wasn't painless, despite the small size of the change, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git snapshot, and it works well.

Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU findutils package doesn't allow you to do the obvious thing:

$ find ~/foo -xattr user.comment
/home/skx/foo/bar/t.txt
/home/skx/foo/bar/xc.txt
/home/skx/foo/bar/x.txt

There are several xattr patches floating around, but I had to bundle my own in debian/patches to get support for finding files that have particular attribute names.
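
(The closest stock equivalent I'm aware of is getfattr, from the attr package, recursing and matching on the attribute name, though the output is much clumsier than a find predicate would be:)

# List files beneath ~/foo which carry a user.comment attribute.
getfattr -R -m user.comment --absolute-names ~/foo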

Maybe one day extended attributes will be taken seriously. (rsync will preserve them given -X, as will cp with --preserve=xattr; I'm hazy on the compatibility with tar, but most things seem to be working.)

| 4 comments

 

Today I mostly removed python

25 September 2014 21:50

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me, though, to double-check my systems to make sure that I didn't have any packages installed that I didn't need, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had python installed, and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of 19 servers I control python was installed solely for the lsb_release script!
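
(If you're curious about your own systems, aptitude can show the dependency chain which pulls a package in:)

# Show why python is installed; on these hosts the chain ended at lsb-release.
aptitude why python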

So I hacked up a horrible replacement for lsb_release in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
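
The guts of such a script are only a handful of lines. A minimal sketch of the approach, not my actual script:

#!/bin/sh
# Crude lsb_release stand-in: defer all names/numbers to
# /etc/os-release, which is present on Wheezy and later.
. /etc/os-release

case "$1" in
    -i) printf 'Distributor ID:\t%s\n' "$NAME" ;;
    -r) printf 'Release:\t%s\n' "$VERSION_ID" ;;
    -d) printf 'Description:\t%s\n' "$PRETTY_NAME" ;;
    *)  echo "Usage: $0 -i|-r|-d" >&2; exit 1 ;;
esac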

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed that package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read: "once or twice in the past year"), and that by accident I had no python scripts installed, I see no reason to keep it around on the off-chance.

My biggest surprise of the day was that even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

| 11 comments

 

Next week I shall be mostly in Kraków

26 September 2014 21:50

Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.

It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.

In other news I've been rationalising DNS entries and domain names recently; all being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.

| 1 comment