
I jumped on the SSL-bandwagon

4 December 2015 21:50

Like everybody else on the internet, today was the day I started rolling out SSL certificates, via Let's Encrypt.

The process wasn't too difficult, but I did have to make some changes. Pretty much every website I have runs under its own UID, and I use a proxy to pass content through to the right back-end.

Running 15+ webservers feels like overkill, but it means that the code running start.steve.org.uk cannot read/modify/break the code that is running this blog - because they run as different UIDs.

To start with, I made sure that all requests to the top-level /.well-known directory were shunted to a local directory, via this in /etc/apache2/conf-enabled/well-known.conf:

Alias /.well-known/ /srv/well-known/

<Directory "/srv/well-known/">
    ForceType text/plain
    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    AuthType None
    Require all granted
</Directory>

Then I configured each proxy to avoid forwarding that path to the back-ends, by adding this to each of the individual virtual-hosts that perform proxying:

<Proxy *>
  Order allow,deny
  Allow from all
</Proxy>
ProxyPass /.well-known !
ProxyPass        / http://localhost:$port/
..

Then it was time to actually generate the certificates. Rather than using the official client I used a simpler one, acme_tiny.py, which allowed me to generate requests easily:

CSR=/etc/apache2/ssl/csr/
KEYS=/etc/apache2/ssl/keys/
CERTS=/etc/apache2/ssl/certs/

# generate a key
openssl genrsa 4096 > $KEYS/lumail.key

# make a CSR
openssl req -new -sha256 -key $KEYS/lumail.key -subj "/" -reqexts SAN \
   -config <(cat /etc/ssl/openssl.cnf \
   <(printf "[SAN]\nsubjectAltName=DNS:www.lumail.org,DNS:lumail.org")) \
   > $CSR/lumail.csr

# Do the validation
acme_tiny.py --account-key ./account.key --csr $CSR/lumail.csr \
  --acme-dir /srv/well-known/acme-challenge/ > $CERTS/lumail.crt.new
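
Wiring the resulting key and certificate into the SSL virtual-host looks something like this - a sketch only: the intermediate-chain path is an assumption (it isn't shown above), and the fresh certificate would first be renamed from .crt.new:

<VirtualHost *:443>
    ServerName lumail.org
    SSLEngine on
    SSLCertificateFile      /etc/apache2/ssl/certs/lumail.crt
    SSLCertificateKeyFile   /etc/apache2/ssl/keys/lumail.key
    SSLCertificateChainFile /etc/apache2/ssl/certs/lets-encrypt-intermediate.pem

    ProxyPass /.well-known !
    ProxyPass        / http://localhost:$port/
    ProxyPassReverse / http://localhost:$port/
</VirtualHost>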

And then I was done. Along the way I found some niggles:

  • If you have a host that listens on IPv6 only you cannot validate your request - this seems like a clear failure.
  • It is assumed that you generate all your certificates in their live-location. e.g. You cannot generate a certificate for foo.example.com on the host bar.example.com.
  • If you forward HTTP -> HTTPS the validation fails. I had to set up rewrite rules to avoid this; for example lumail.org contains this:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/.well-known
        RewriteRule ^/(.*) https://lumail.org/$1 [L]

The first issue is an annoyance. The second issue is a real pain. For example *.steve.org.uk listens on one machine, except for webmail.steve.org.uk. Since there are no wildcards, I created a single certificate with Alt-Names for a bunch of names such as:

  • ..
  • blog.steve.org.uk
  • start.steve.org.uk
  • ..

Then I'll separately create a certificate for the webmail host - which I've honestly not done yet.

Still, I wrote a nice little script to generate SSL certificates for a number of domains, each with different Alt-Names, wrapping around the acme_tiny.py script, and regenerating all my deployed certificates is now a two-minute job.
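
A sketch of such a wrapper might look like the following; the paths, domain lists, and helper names are assumptions, since the original script isn't shown:

```shell
#!/bin/bash
# Sketch of a multi-domain wrapper around acme_tiny.py.
set -e

CSR=/etc/apache2/ssl/csr
KEYS=/etc/apache2/ssl/keys
CERTS=/etc/apache2/ssl/certs

# Turn "a b c" into "DNS:a,DNS:b,DNS:c" for the SAN extension.
san_names() {
    printf '%s' "$1" | tr ' ' '\n' | sed 's/^/DNS:/' | paste -sd, -
}

# issue NAME DOMAIN [DOMAIN..] - generate key/CSR and fetch a certificate.
issue() {
    name=$1; shift
    domains="$*"

    # Re-use an existing key, otherwise generate one.
    [ -f "$KEYS/$name.key" ] || openssl genrsa 4096 > "$KEYS/$name.key"

    openssl req -new -sha256 -key "$KEYS/$name.key" -subj "/" -reqexts SAN \
        -config <(cat /etc/ssl/openssl.cnf \
            <(printf '[SAN]\nsubjectAltName=%s' "$(san_names "$domains")")) \
        > "$CSR/$name.csr"

    acme_tiny.py --account-key ./account.key --csr "$CSR/$name.csr" \
        --acme-dir /srv/well-known/acme-challenge/ > "$CERTS/$name.crt.new" \
        && mv "$CERTS/$name.crt.new" "$CERTS/$name.crt"
}

# Usage, one line per certificate:
#   issue lumail lumail.org www.lumail.org
#   issue steve  blog.steve.org.uk start.steve.org.uk
```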

(People talk about renewing certificates. I don't see the gain. Just replace them utterly every two months and you'll be fine.)
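
Under that model "renewal" is just a scheduled full re-run; a crontab sketch, where the wrapper-script name is hypothetical:

# Regenerate every certificate from scratch every two months.
0 4 1 */2 * root /usr/local/sbin/regenerate-ssl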

| 10 comments

 

Comments on this entry

icon Andy at 23:09 on 4 December 2015

I did something similar today too, but hid everything behind haproxy, as I have some web services that I can't run via Apache.

I use some simple rerouting in haproxy to divert all my host's letsencrypt auth requests into a single private virtual domain in Apache.
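
A minimal haproxy sketch of that kind of diversion - the backend names and ports are assumptions:

frontend http-in
    bind *:80
    # Send all ACME challenges to a dedicated Apache virtual-host.
    acl letsencrypt path_beg /.well-known/acme-challenge/
    use_backend acme if letsencrypt
    default_backend webservers

backend acme
    server apache-acme 127.0.0.1:8080

backend webservers
    server web1 127.0.0.1:8000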

Only concern I have ATM is sometimes the cert renewal fails so automating it fully is a bit hit and miss.


icon Mihail Fedorov at 04:03 on 5 December 2015

Spent 2 hours writing custom wrappers for letsencrypt-auto. Only now found your post with link to tiny client. Thanks.

I love the way they handle SANs (alternative names). You can put all your vhosts in just one cert - really, up to 100 names. No need to issue individual certs unless you want to hide one vhost.

icon Steve Kemp at 05:01 on 5 December 2015

Mihail: I'm glad it was useful!

Andy: I use HAProxy too on a couple of sites, and found that a source of pain - if you're using HAProxy as a load-balancer you have to hope the challenge is uploaded to the server that is hit. I'll need to do more work to cover this in the future - mirroring the secret directory, or forcing a canonical location, or similar.

icon Mihail Fedorov at 05:14 on 5 December 2015
https://fedorov.net

Also,

You'd better link .well-known/acme-challenge, not the .well-known folder itself. There are a lot of other things that use it: http://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml
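
Following that advice, the alias from the start of the post could be narrowed to just the ACME path, e.g.:

Alias /.well-known/acme-challenge/ /srv/well-known/acme-challenge/

<Directory "/srv/well-known/acme-challenge/">
    ForceType text/plain
    Require all granted
</Directory>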

icon Jelmer Vernooij at 09:54 on 6 December 2015
https://www.jelmer.uk/

FWIW the progress on IPv6-only support is tracked here: https://github.com/letsencrypt/boulder/issues/593

icon Ralf at 16:55 on 6 December 2015

"If you forward HTTP -> HTTPS the validation fails."

I cannot confirm this. I have HTTP -> HTTPS redirects set up and they work just fine.

Maybe this is related to me already having "accepted" (StartSSL-signed) certificates for these domains previously. Which kind of cert did the HTTPS host in question have at the time of the check?

icon Steve Kemp at 21:05 on 6 December 2015
http://steve.org.uk/

The previous certificate was a previously issued LE one! I cannot reproduce myself now, because I've registered too many certificates for the domain in question - but I will try again in seven days.

icon andy at 10:44 on 7 December 2015

I've documented my process of handling Letsencrypt certs for multiple domains here - http://blog.defsdoor.org/using-letsencrypt-ssl-certificates/

icon Paul Menzel at 11:33 on 7 December 2015
https://www.giantmonkey.de

Hi. Your recent efforts also affected your APT mirrors.

$ sudo apt upgrade
[…]
E: The method driver /usr/lib/apt/methods/https could not be found.
N: Is the package apt-transport-https installed?

After commenting out your source URLs, running sudo apt upgrade and sudo apt install apt-transport-https, and re-enabling your source URLs, everything works now.

icon Steve Kemp at 11:36 on 7 December 2015
http://steve.org.uk/

Thanks Paul - I had the transport enabled already, so I hadn't noticed..