|
Entries tagged docker
6 January 2014 21:50
Recently I wrote about docker; after a brief diversion into using runit for service management, I then wrote about it some more.
I'm currently setting up a new PXE-boot environment which uses docker for serving DHCP and TFTP, which is my first "real" use of it of any note. It is fun, although I now discover I'm not alone in using docker for this purpose.
Otherwise life is good, and my blog-spam detection service recently broke through the 11 million-rejected-comment barrier. The Wordpress Plugin is seeing a fair amount of use, which is encouraging - but more reviews would be nice ;)
I could write about work - I've not done that since changing jobs - but I'm waiting for something disruptive to happen first.
ObQuote: Dune. (film)
Tags: blogspam, docker, work
|
11 January 2014 21:50
Having decided to take a fortnight off while looking for a new job, I assumed I'd spend a while coding.
Happily my wife, who is a (medical) doctor, has been home recently so we've got to spend time together instead.
I'm currently pondering projects which will be small enough to be complete in a week, but large enough to be useful. Thus far I've just reimplemented RSS -> chat which I liked a lot at Bytemark.
I have my own chat-server set up, which has no users but myself. Instead it has a bunch of rooms configured, and different rooms get different messages.
I've now created a new "RSS" room, and a bunch of RSS feeds get announced there when new posts appear. It's a useful thing if you like following feeds, and happen to have a chat-room set up.
I use Prosody as my chat-server, and I use my http2xmpp code to implement a simple HTTP-POST to XMPP broadcast mechanism.
The new script is included as examples/rss-announcer and just polls RSS feeds - URLs which haven't been broadcast previously are posted to the HTTP-server, and thus get injected into the chatroom. A little convoluted, but simple to understand.
This time round I'm using Redis to keep track of which URLs have been seen already.
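The Redis part is tiny - just a set of already-seen URLs. A sketch of the idea in shell; the key name "rss:seen", and the REDIS variable (which exists so the functions can be exercised without a live server), are my assumptions rather than the real code:

```shell
#!/bin/sh
# Remember which feed URLs have already been announced, via a Redis
# set.  Key name and the REDIS override are assumptions for
# illustration - the real script will differ.
REDIS=${REDIS:-redis-cli}

seen_before() {
    # Prints 1 if the URL is already present in the set, 0 otherwise.
    $REDIS SISMEMBER rss:seen "$1"
}

mark_seen() {
    $REDIS SADD rss:seen "$1"
}
```

With a live Redis the poller would call seen_before before announcing a URL, and mark_seen afterwards.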
Beyond that I've been doing a bit of work for friends, and have recently setup an nginx server which will handle 3000+ simultaneous connections. Not too bad, but I'm sure we can make it do better - another server running on BigV which is nice to see :)
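For what it's worth the connection-count side of that tuning mostly comes down to a couple of nginx directives; an illustrative fragment, with values assumed rather than taken from the real configuration:

```
# /etc/nginx/nginx.conf - illustrative values only.
worker_processes  2;

events {
    # Maximum simultaneous connections per worker process.
    worker_connections  4096;
}
```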
I'll be handling a few Squeeze -> Wheezy upgrades in the next week too, setting up backups, and doing some other related "consultation".
If I thought there was a big enough market locally I might consider doing that full-time, but I suspect that relying upon random work wouldn't work long-term.
Tags: docker, http2xmpp, redis
|
17 February 2014 21:50
I've updated my markdown-pastebin site, to be a little cleaner, and to avoid spidering issues.
Previously every piece of uploaded text received an incrementing integer to describe it - which meant it was trivially easy for others to see how many pieces of text had been uploaded, and to spider all past uploads (unless the user deleted them).
Now each fresh paste receives a random UUID to describe it, and this means spidering is no longer feasible.
I've also posted the source code to GitHub so folk can report bugs, fork, etc.
That source code now includes a Dockerfile which allows you to quickly and easily build your own container running this wonderful service, and launch it without worrying about trashing your server ;)
Anyway other than the user-interface overhaul it is still as functional, or not, as it used to be!
Tags: docker, markdown, random
|
22 February 2014 21:50
For the past few years I've hosted all my websites in a "special" way:
- Each website runs under its own UID.
- Each website runs a local thttpd / webserver.
- Each server binds to localhost, on a high-port.
- My recipe is that the port of the webserver for user "foo" is "$(id -u foo)".
- On the front-end I have a proxy to route connections to the appropriate back-end, based on the Host header.
The webserver I chose initially was thttpd, which gained points because it was small, auditable, and simple to launch. Something like this was my recipe:
#!/bin/sh
exec thttpd -D -C /srv/steve.org.uk/thttpd.conf
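The referenced configuration file is similarly small; an illustrative sketch (values assumed - the real file will differ), with the port derived from the owning user's UID as described above:

```
# /srv/steve.org.uk/thttpd.conf - illustrative values.
host=127.0.0.1
port=1017            # i.e. $(id -u steve)
dir=/srv/steve.org.uk/htdocs
user=steve
cgipat=**.cgi
logfile=/srv/steve.org.uk/logs/access.log
```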
Unfortunately thttpd suffers from a few omissions, most notably that it supports neither "Keep-Alive" nor compression (i.e. gzip/deflate), so it would always be slower than I wanted.
On the plus side it was simple to use, supported CGI scripts, and served me well once I'd patched it to support X-Forwarded-For for IPv6 connections.
Recently I set up a server optimization site and was a little disappointed that the site itself scored poorly on Google's page-speed test. So I removed thttpd for that site, and replaced it with nginx. The end result was that the site scored 98/100 on Google's page-speed test. Progress. Unfortunately I couldn't do that globally because nginx doesn't support old-school plain CGI scripts.
So last night I removed both nginx and thttpd, and now every site on my box is hosted using lighttpd.
There weren't too many differences in the setup, though I had to add some rules to enable caching for *.css files, etc, and some of my code needed updating.
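The caching rules amount to only a few lines of lighttpd configuration; a sketch, with an illustrative lifetime:

```
# Cache static assets via mod_expire (lifetime illustrative).
server.modules += ( "mod_expire" )

$HTTP["url"] =~ "\.(css|js|png|jpg|gif)$" {
    expire.url = ( "" => "access plus 7 days" )
}
```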
Beyond that today I've set up a dedicated docker host - which allows me to easily spin up containers. Currently I've got graphite monitoring for my random hosts, and a WordPress guest for plugin development/testing.
Now to go back to reading Off to be the Wizard - not as good as Rick Cook's wizardry series (which got less good as time went on, but started off strongly), but still entertaining.
Tags: docker, graphite, lighttpd, nginx, thttpd
|
10 June 2014 21:50
Some coding updates:
My templer static site generator has now been uploaded to CPAN, and is available as App::Templer.
I've converted most of my Dockerfiles to work with docker 1.0.0, which is nice.
I also hacked up a fun DNS-server for sharing JSON-encoded data within a LAN or other environment.
Finally I updated the blogspam-detecting site a little, on the back-end. The code is now running inside Docker containers which means I can redeploy more easily in the future.
My blog post about looking for a job received some attention via a Reddit advert I posted to /r/edinburgh + /r/sysadmin, but thus far has mostly resulted in people wanting me to write code for them, which is frustrating.
For the moment I'm working on a fun challenge involving (email) spam-detection. That takes me back.
Tags: cpan, docker, perl, templer
|
4 September 2014 21:50
After spending a while fighting with upstart, at work, I decided that systemd couldn't be any worse and yesterday morning upgraded one of my servers to run it.
I have two classes of servers:
- Those that run standard daemons, with nothing special.
- Those that run different services under runit.
- For example docker guests, node.js applications, and similar.
I thought it would be a fair test to upgrade one server of each class, to see how it worked.
The Debian wiki has instructions for installing systemd, and both systems came up just fine.
Although I realize I should replace my current runit jobs with systemd units, I didn't want to do that yet. So I wrote a systemd .service file to launch runit against /etc/service, as expected, and that was fine.
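That unit file is trivial; a sketch of the idea (paths assumed - Debian's runit ships runsvdir, which supervises everything beneath /etc/service):

```
# /etc/systemd/system/runit.service - a sketch, paths assumed.
[Unit]
Description=runit service supervision

[Service]
ExecStart=/usr/bin/runsvdir -P /etc/service
Restart=always

[Install]
WantedBy=multi-user.target
```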
Docker was a special case. I wrote a docker.service + docker.socket file to launch the daemon, but when I wrote a graphite.service file to start a docker instance it kept on restarting, or failing to stop.
In short I couldn't use systemd to manage running a docker guest, but that was probably user-error. For the moment the docker-host has a shell script in root's home directory to launch the guest:
#!/bin/sh
#
# Run Graphite in a detached state.
#
/usr/bin/docker run -d -t -i -p 8080:80 -p 2003:2003 skxskx/graphite
Without getting into politics (ha), systemd installation seemed simple, resulted in a faster boot, and didn't cause me horrific problems. Yet.
ObRandom: Not sure how systemd is controlling prosody, for example.
If I run the status command I can see it is using the legacy system:
root@chat ~ # systemctl status prosody.service
prosody.service - LSB: Prosody XMPP Server
Loaded: loaded (/etc/init.d/prosody)
Active: active (running) since Wed, 03 Sep 2014 07:59:44 +0100; 18h ago
CGroup: name=systemd:/system/prosody.service
└ 942 lua5.1 /usr/bin/prosody
I've installed systemd and systemd-sysv, so I thought /etc/init.d was obsolete. I guess it is making pretend-services for things it doesn't know about (because obviously not all packages contain /lib/systemd/system entries), but I'm unsure how that works.
Tags: docker, systemd, wheezy
|
8 November 2014 21:50
Docker is the well-known tool for building, distributing, and launching containers.
I use it personally to run a chat-server, a graphite instance, and I distribute some of my applications with Dockerfiles too, to ease deployment.
Here are some brief notes on things that might not be obvious.
For a start when you create a container it is identified by a 64-character ID. This ID is truncated and used as the hostname of the new guest - but if you ever care you can discover the full ID from within the guest:
~# awk -F/ '{print $NF}' /proc/self/cgroup
9d16624a313bf5bb9eb36f4490b5c2b7dff4f442c055e99b8c302edd1bf26036
Compare that with the hostname:
~# hostname
9d16624a313b
Assigning names to containers is useful, for example:
$ docker run -d -p 2222:22 --name=sshd skxskx/sshd
However note that names must be removed before they can be reused:
#!/bin/sh
# launch my ssh-container - removing the name first
docker rm sshd || true
docker run --name=sshd -d -p 2222:22 skxskx/sshd
The obvious next step is to get the IP of the new container, and set up a hostname for it, sshd.docker. Getting the IP is easy, via either the name or the ID:
~$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' sshd
172.17.0.2
The only missing step is the ability to do that magically. You'd hope there would be a hook that you could run when a container has started - unfortunately there is no such thing. Instead you have two choices:
- Write a script which parses the output of "docker events" and fires appropriately when a guest is created/destroyed.
- Write a wrapper script for launching containers, and use that to handle the creation.
I wrote a simple watcher to fire when events are created, which lets me do the job.
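The watcher is little more than a loop over the output of "docker events"; a minimal sketch follows - the event-line format varies between docker versions, so the pattern-matching here is an assumption to adjust:

```shell
#!/bin/sh
# React to containers starting and stopping, as reported by
# "docker events".  The matching assumes the event line contains the
# words "start" / "die" - adjust for your docker version's output.
# Replace the echo calls with real handling, e.g. updating DNS.

handle_event() {
    case "$1" in
        *" start"*)            echo "started: $1" ;;
        *" die"*|*" destroy"*) echo "stopped: $1" ;;
    esac
}

watch_events() {
    docker events | while read -r line
    do
        handle_event "$line"
    done
}
```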
But running a daemon just to watch for events seems like the wrong way to go. Instead I've switched to running via a wrapper dock-run:
$ dock-run --name=sshd -d -p 2222:22 skxskx/sshd
This invokes run-parts on the creation directory, if present, and that allows me to update DNS. So "sshd.docker.local" will point to the IP of the new image.
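In outline the wrapper does nothing more than this sketch; the hook-directory name, and the DOCKER variable (which exists only so the function can be exercised without docker installed), are my assumptions rather than the real code:

```shell
#!/bin/sh
# dock-run sketch - wrap "docker run", then fire hooks from a
# creation directory so that e.g. DNS can be updated for the new
# guest.  Hook directory name and DOCKER override are assumptions.
DOCKER=${DOCKER:-docker}
HOOKS=${HOOKS:-/etc/dock-run/create.d}

dock_run() {
    # "docker run -d" prints the new container's ID.
    id=$($DOCKER run "$@") || return 1

    # Each hook receives the container ID as its argument.
    if [ -d "$HOOKS" ]; then
        run-parts --arg="$id" "$HOOKS"
    fi

    printf '%s\n' "$id"
}
```

Hooks dropped into the directory then receive the new container's ID, and can update DNS however they like.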
The wrapper was two minutes work, but it does work, and if you like you can find it here.
That concludes my notes on docker - although you can read articles I wrote on docker elsewhere.
Tags: docker
|
19 March 2018 13:00
I've been thinking about serverless-stuff recently, because I've been re-deploying a bunch of services and some of them are almost microservices. One thing that a lot of my things have in common is that they're all simple HTTP-servers, presenting an API or end-point over HTTP. There is no state, no database, and no complex dependencies.
These should be prime candidates for serverless deployment, but at the same time I don't want to have to recode them for AWS Lambda, or any similar locked-down service. So docker is the obvious answer.
Let us pretend I have ten HTTP-based services, each of which binds to port 8000. To make these available I could just set up a simple HTTP front-end on a single host.
We'd need to route each request to the appropriate back-end, so we'd start to present URLs like http://api.example.fi/steve/foo. Here any request which had the prefix steve/foo would be routed to a running instance of the docker container steve/foo. In short the name of the (first) path component performs the mapping to the back-end.
I wrote a quick hack, in golang, which would bind to port 80 and dynamically launch the appropriate containers, then proxy back and forth. I soon realized that this is a terrible idea though! The problem is that a malicious client could start making requests for arbitrary container-names I'd never intended to host.
That would trigger my API-proxy to download those containers and spin them up, allowing arbitrary (albeit "sandboxed") code to run. So taking a step back: we want to use the path-component of a URL to decide where to route the traffic, and each container will bind to :8000 on its private (docker) IP. There's an obvious solution here: HAProxy.
So I started again and wrote a trivial golang daemon which reacts to docker events - containers starting and stopping - and generates a suitable haproxy configuration file, which can then be used to reload haproxy.
The end result is that if I launch a container named "foo" then requests to http://api.example.fi/foo will reach it. Success! The only downside to this approach is that you must manually launch your back-end docker containers - but if you do so they'll become immediately available.
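The generated configuration is nothing special; for a single container named "foo" it might look roughly like this (names and addresses illustrative):

```
# Sketch only - names and addresses are illustrative.
frontend api-in
    mode http
    bind :80
    # Map the first path component to a back-end of the same name.
    use_backend foo if { path_beg /foo }

backend foo
    mode http
    # The container's private (docker) IP, port 8000.
    server foo-1 172.17.0.2:8000
```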
I guess there is another advantage. Since you're launching the containers (manually) you can set up links, volumes, and what-not. Much more so than if your API layer spun them up with zero per-container knowledge.
Tags: docker, golang, serverless
|
29 April 2020 14:00
For the foreseeable future I'm working for myself, as a contractor, here in sunny Helsinki, Finland.
My existing contract only requires me to work 1.5-2.0 days a week, meaning my week looks something like this:
- Monday & Tuesday
- I work on my existing contract.
- Wednesday - Friday
- I act as a stay-at-home dad.
It does mean that I'm available for work Wednesday-Friday though, in the event I can find projects to work upon, or companies who would be willing to accept my invoices.
I think of myself as a sysadmin, but I know all about pipelines, automation, system administration, and coding in C, C++, Perl, Ruby, Golang, Lua, etc.
On the off-chance anybody reading this has a need for small projects, services, daemons, or APIs to be implemented then don't hesitate to get in touch.
I did manage to fill a few days over the past few weeks completing an online course from Helsinki Open University, Devops with Docker; it is possible I'll find some more courses to complete in the future. (There is an upcoming course, Devops with Kubernetes, which I'll definitely complete.)
Tags: devops, docker, employment, golang, helsinki, open university, sysadmin
|