
Changing my stack ..

22 February 2014 21:50

For the past few years I've hosted all my websites in a "special" way:

  • Each website runs under its own UID.
  • Each website runs a local thttpd / webserver.
  • Each server binds to localhost, on a high-port.
    • My recipe is that the port of the webserver for user "foo" is "$(id -u foo)" - see the sketch below.
  • On the front-end I have a proxy to route connections to the appropriate back-end, based on the Host header.
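Putting that together, a rough sketch of launching one such per-site server (using thttpd, which I talk about below; the username, paths, and exact flags here are illustrative rather than my real scripts):

#!/bin/sh
# Hypothetical per-site launcher: run the site's webserver as its own
# user, bound to localhost only, on a port derived from that user's UID.
user=foo
port=$(id -u "$user")
exec su -s /bin/sh "$user" -c \
    "thttpd -D -h 127.0.0.1 -p $port -C /srv/$user/thttpd.conf"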

The webserver I chose initially was thttpd, which gained points because it was small, auditable, and simple to launch. Something like this was my recipe:

#!/bin/sh
# Run thttpd in the foreground (-D), reading host/port/docroot from
# the per-site configuration file (-C).
exec thttpd -D -C /srv/steve.org.uk/thttpd.conf

Unfortunately thttpd suffers from a few omissions: most notably it supports neither "Keep-Alive" nor compression (i.e. gzip/deflate), so it would always be slower than I wanted.

On the plus side it was simple to use, supported CGI scripts, and served me well once I'd patched it to support X-Forwarded-For for IPv6 connections.

Recently I set up a server optimization site and was a little disappointed that the site itself scored poorly on Google's page-speed test. So I removed thttpd for that site and replaced it with nginx. The end result was that the site scored 98/100 on Google's page-speed test. Progress. Unfortunately I couldn't do that globally, because nginx doesn't support old-school plain CGI scripts.

So last night I removed both nginx and thttpd, and now every site on my box is hosted using lighttpd.

There weren't too many differences in the setup, though I had to add some rules to enable caching for *.css files and the like - something along the lines of the snippet below - and some of my code needed updating.
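For reference, the caching rules were roughly of this shape (a sketch, assuming mod_expire is available; the extensions and lifetime are illustrative rather than my exact configuration):

server.modules += ( "mod_expire" )

$HTTP["url"] =~ "\.(css|js|png|gif|jpg)$" {
    expire.url = ( "" => "access plus 7 days" )
}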

Beyond that, today I've set up a dedicated docker host, which allows me to easily spin up containers. Currently I've got graphite monitoring for my random hosts, and a wordpress guest for plugin development/testing.
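Each guest is just a one-liner to launch; something roughly like this (the image names and port mappings are placeholders, not the exact containers I'm running):

# Hypothetical examples - image names and ports are illustrative.
docker run -d --name graphite -p 8080:80 -p 2003:2003 example/graphite
docker run -d --name wp-dev   -p 8081:80 example/wordpress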

Now to go back to reading Off to be the wizard .. - not as good as Rick Cook's wizardry series (which got less good as time went on, but started off strongly), but still entertaining.


 

Comments on this entry

Krister Brus at 11:10 on 23 February 2014
http://www.adelo.se

I may be missing something in your configuration, but nginx does in fact support "old-school plain CGI scripts" via the fcgiwrap wrapper: install the fcgiwrap package, include one file in the nginx setup, and nginx is ready to serve scripts, for example http://example.com/cgi-bin/hello.cgi. Very easy and it works well.
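A minimal location block looks roughly like this (the socket path is the Debian default - adjust for your system):

location /cgi-bin/ {
    gzip off;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}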

Steve Kemp at 11:39 on 23 February 2014
http://www.steve.org.uk/

What I meant was that nginx doesn't support native CGI scripts.

You can get CGI via fastcgi, or the fcgiwrap tool you mention, but that means running a second daemon for each user - since I want all CGI scripts for a site to run as that site's UID. (i.e. One compromised site cannot read/mess-with another.)