
 

Entries posted in November 2014

IPv6 only server

2 November 2014 21:50

I enjoy the tilde.club site/community, and since I've just set up an IPv6-only host I was looking to do something similar.

Unfortunately my (working) code to clone github repositories into per-user directories fails - because github isn't accessible over IPv6.

That's a shame.

Oddly enough chromium, the browser packaged for wheezy, doesn't want to display IPv6-only websites either. For example it fails to load http://ipv6.steve.org.uk/.

In the meantime I've got a server set up which is only accessible over IPv6, and I'm a little smug. (http://ipv6.website/).

(Yes it is true that I've used all the IPv4 addresses allocated to my VLAN. That's just a coincidence. Ssh!)


 

Planning how to configure my next desktop

6 November 2014 21:50

I recently set up a bunch of IPv6-only accessible hosts, which I mentioned in my previous blog post.

In the end I got them talking to the IPv4/legacy world via the installation of an OpenVPN server - they connect over IPv6, get a private 10.0.0.0/24 IP address, and that traffic is masqueraded via the OpenVPN gateway.
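The relevant bits of the gateway configuration are only a handful of lines. Something like the following captures the idea, though the interface name and the exact options are illustrative rather than copied from my real host:

# /etc/openvpn/server.conf (excerpt) - a sketch, not my exact config.
proto udp6                      # accept clients over IPv6
port 1194
dev tun
server 10.0.0.0 255.255.255.0   # hand each client a private address

# On the gateway itself, masquerade the clients' IPv4 traffic:
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE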

But the other thing I've been planning recently is how to configure my next desktop system. I generally do all development, surfing, etc, on one desktop system. I use virtual desktops to organize things, and I have a simple scripting utility to juggle windows around into the correct virtual-desktop as they're launched.
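(The utility itself is nothing clever; the idea amounts to little more than this wmctrl-based sketch, with made-up titles and desktop numbers:)

#!/bin/sh
# Juggle windows onto the right virtual desktops, by title.
# A sketch of the idea, not my actual utility.
while read title desktop; do
    wmctrl -r "$title" -t "$desktop"
done <<EOF
emacs 0
Iceweasel 1
xine 2
EOF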

Planning a replacement desktop means installing a fresh desktop, then getting all the software working again. These days I'd probably use docker images to do development within, along with a few virtual machines (such as the pbuilder host I used to release all my Debian packages).

But there are still niggles. I'd like to keep the base system lean, with few packages, but you can't run xine remotely; similarly I need mpd/sonata for listening to music, emacs for local stuff, etc, etc.

In short there is always the tendency to install yet-another package, service, or application on the desktop, which makes migration a pain.

I'm not sure I could easily avoid that, but it is worth thinking about. I guess I could configure a puppet/slaughter/cfengine host and use that to install the desktop - but I've always done desktops "manually" and servers "magically" so it's a bit of a change in thinking.
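If I did take that route the manifest needn't be large; even a trivial sketch like this (with a purely illustrative package list) would capture most of the manual work:

# A minimal puppet sketch - the package list is illustrative.
package { [ 'emacs', 'sonata', 'xine-ui' ]:
  ensure => installed,
}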


 

Some brief notes on Docker

8 November 2014 21:50

Docker is the well-known tool for building, distributing, and launching containers.

I use it personally to run a chat-server and a graphite instance, and I distribute some of my applications with Dockerfiles too, to ease deployment.
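The Dockerfiles rarely need to be complicated. A minimal sketch, along the lines of the sshd image used below, though not the actual skxskx/sshd file:

FROM debian:wheezy

# Install and prepare the SSH daemon.
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir -p /var/run/sshd

EXPOSE 22

# Run sshd in the foreground so the container stays alive.
CMD ["/usr/sbin/sshd", "-D"]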

Here are some brief notes on things that might not be obvious.

For a start, when you create a container it is identified by a 64-character hex ID. This ID is truncated to twelve characters and used as the hostname of the new guest - but if you ever care you can discover the full ID from within the guest:

~# awk -F/ '{print $NF}' /proc/self/cgroup
9d16624a313bf5bb9eb36f4490b5c2b7dff4f442c055e99b8c302edd1bf26036

Compare that with the hostname:

~# hostname
9d16624a313b

Assigning names to containers is useful, for example:

$ docker run -d -p 2222:22 --name=sshd skxskx/sshd

However note that names must be removed before they can be reused:

#!/bin/sh
# launch my ssh-container - removing the name first
docker rm  sshd || true
docker run --name=sshd -d -p 2222:22 skxskx/sshd

The obvious next step is to get the IP of the new container, and set up a hostname for it, such as sshd.docker. Getting the IP is easy, via either the name or the ID:

~$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshd
172.17.0.2

The only missing step is the ability to do that magically. You'd hope there would be a hook that you could run when a container has started - unfortunately there is no such thing. Instead you have two choices:

  • Write a script which parses the output of "docker events" and fires appropriately when a guest is created/destroyed.
  • Write a wrapper script for launching containers, and use that to handle the creation.

I wrote a simple watcher to fire when events are created, which lets me do the job.
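The core of it is nothing more than a loop like this - the parsing is illustrative, because the output of "docker events" isn't a stable format:

#!/bin/sh
# Watch for container-starts, and react to each one.
docker events | while read line; do
    case "$line" in
        *start*)
            # Pull the full container ID out of the event-line.
            id=$(echo "$line" | grep -oE '[0-9a-f]{64}' | head -n 1)
            ip=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$id")
            echo "container $id is up with IP $ip"
            ;;
    esac
done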

But running a daemon just to watch for events seems like the wrong way to go. Instead I've switched to running via a wrapper dock-run:

$ dock-run --name=sshd -d -p 2222:22 skxskx/sshd

This invokes run-parts on the creation directory, if present, and that allows me to update DNS. So "sshd.docker.local" will point to the IP of the new container.
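In outline the wrapper amounts to this - the hook-directory is arbitrary, and shown here as /etc/dock-run/create.d:

#!/bin/sh
# dock-run - launch a container, then fire any creation hooks.
id=$(docker run "$@")
if [ -d /etc/dock-run/create.d ]; then
    # Export the ID so the hook-scripts can inspect the new container.
    CONTAINER=$id run-parts /etc/dock-run/create.d
fi
echo "$id"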

The wrapper was two minutes' work, but it does work, and if you like you can find it here.

That concludes my notes on docker - although you can read articles I wrote on docker elsewhere.


 

How could you rationally fork Debian?

9 November 2014 21:50

The topic of Debian forks has come up a lot recently, and as time goes on I've actually started considering the matter seriously: How would you fork Debian?

The biggest stumbling block is that the Debian distribution contains thousands of packages, which are maintained by thousands of developers. A small team has virtually no hope of keeping up to date, importing changes, dealing with bug-reports, etc. Instead you have to pick your battles and decide what you care about.

This is why Ubuntu split things into "main" and "universe": that way they didn't have to deal with bug reports - instead they could just say "Try again in six months. Stuff from that repository isn't supported. Sorry!"

So if you were going to split the Debian project into "supported" and "unsupported" what would you use as the dividing line? I think the only sensible approach would be:

  • Base + Server stuff.
  • The rest.

On that basis you'd immediately drop the support burden of GNOME, KDE, Firefox, Xine, etc. All the big, complex, and user-friendly stuff would just get thrown away. What you'd end up with would be a Debian-Server fork, or derivative.

Things you'd package and care about would include:

  • The base system.
  • The kernel.
  • SSHD.
  • Apache / Nginx / thttpd / lighttpd / etc.
  • PHP / Perl / Ruby / Python / etc.
  • Jabberd / ircd / rsync / etc.
  • MySQL / Postgres / Redis / MariaDB / etc.

Would that be a useful split? I suspect it would. It would also be manageable by a reasonably small team.

That split would also mean if you were keen on dropping any particular init-system you'd not have an unduly difficult job - your server wouldn't be running GNOME, for example.

Of course if you're thinking of integrating a kernel and server-only stuff then you might instead prefer a BSD-based distribution. But if you did that you'd miss out on Docker. Hrm.


 

An experiment in (re)building Debian

20 November 2014 21:50

I've rebuilt many Debian packages over the years, largely to fix bugs which affected me, or to add features which didn't make the cut in various releases. For example I made a package of fabric available for Wheezy, since it wasn't in the release. (Happily in that case a wheezy-backport became available. Similar cases involved repackaging gtk-gnutella when the protocol changed and the official package in the lenny release no longer worked.)

I generally release a lot of my own software as Debian packages, although I'll admit I've started switching to publishing Perl-based projects on CPAN instead - from which they can be debianized via dh-make-perl.
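That turns a CPAN distribution into a Debian package with a single command - the module name here is just a placeholder:

$ dh-make-perl --build --cpan Some::Module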

One thing I've not done for many years is a mass-rebuild of Debian packages. I did that once upon a time when I was trying to push for the stack-smashing-protection inclusion all the way back in 2006.

Having had a few interesting emails this past week I decided to do the job for real. I picked a random server of mine, rsync.io, which stores backups, and decided to rebuild it using "my own" packages.

The host has about 300 packages installed upon it:

root@rsync ~ # dpkg --list | grep ^ii | wc -l
294

I got the source to every package, patched the changelog to bump the version, and rebuilt every package from source. That took about three hours.
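Per package the process was roughly the following - the package name is just an example, and the real job was scripted rather than typed:

# Fetch the source, and the build-dependencies.
apt-get source acpi
apt-get build-dep -y acpi
cd acpi-*/

# Bump the version with a local suffix, then build unsigned packages.
dch --local skx "Rebuilt from source."
dpkg-buildpackage -us -uc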

Every package has a "skx1" suffix now, and all the build-dependencies were also determined by magic and rebuilt:

root@rsync ~ # dpkg --list | grep ^ii | awk '{ print $2 " " $3}'| head -n 4
acpi 1.6-1skx1
acpi-support-base 0.140-5+deb7u3skx1
acpid 1:2.0.16-1+deb7u1skx1
adduser 3.113+nmu3skx1

The process was pretty quick once I started getting more and more of the packages built. The only shortcut was not explicitly updating the dependencies to rely upon my updated packages. For example bash has a Debian control file that contains:

Depends: base-files (>= 2.1.12), debianutils (>= 2.15)

That should have been updated to say:

Depends: base-files (>= 2.1.12skx1), debianutils (>= 2.15skx1)

However I didn't do that, because I suspect if I did want to do this decently, and I wanted to share the source-trees and the generated packages, the way to go would not be messing about with Debian versions; instead I'd create a new Debian release: "alpha-apple", "beta-banana", "crunchy-carrot", "dying-dragonfruit", "easy-elderberry", or similar.

In conclusion: Importing Debian packages into git, much like Ubuntu did with bzr, is a fun project, and it doesn't take much to mass-rebuild if you're not making huge changes. Whether it is worth doing is an entirely different question of course.


 

Lumail 2.x ?

22 November 2014 21:50

I've continued to ponder the idea of reimplementing the console mail-client I wrote, lumail, using a more object-based codebase.

For one thing having loosely coupled code would allow testing things in isolation, which is clearly a good thing.

I've written some proof-of-concept code which will allow the following Lua to be evaluated:

-- Open the maildir.
users = Maildir.new( "/home/skx/Maildir/.debian.user" )

-- Count the messages.
print( "There are " .. users:count() .. " messages in the maildir " .. users:path() )

--
-- Now we want to get all the messages and output their paths.
--
for k,v in ipairs( users:messages()) do
    --
    -- Here we could do something like:
    --
    --   if ( string.find( v:headers()["subject"], "troll", 1, true ) ) then v:delete() end
    --
    -- Instead play-nice and just show the path.
    print( k .. " -> " .. v:path() )
end

This is all a bit ugly, but I've butchered some code together that works, and tried to solicit feedback from lumail users.

I'd write more but I'm tired, and intending to drink whisky and have an early night. Today I mostly replaced pipes in my attic. (Is it "attic", or is it "loft"? I keep alternating!) Fingers crossed this will mean a dry kitchen in the morning.
