
Redesigning my clustered website

7 February 2016 21:50

I'm slowly planning the redesign of the cluster which powers the Debian Administration website.

Currently the design is simple.

In brief there is a load-balancer that handles SSL-termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking or unusual.

(In truth there are two database servers, and rather than a single installation of HAProxy it runs upon each of the webservers; one is the master, with failover handled via ucarp. Logically though traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)
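The routing described above could be expressed in an HAProxy configuration fragment something like this. This is purely illustrative; the backend names, addresses, and certificate path are assumptions, not the site's real configuration:

```
# Illustrative fragment only - names, IPs and paths are invented.
frontend https-in
    bind 0.0.0.0:443 ssl crt /etc/haproxy/site.pem
    default_backend apache-nodes

backend apache-nodes
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
    server web3 10.0.0.13:80 check
    server web4 10.0.0.14:80 check
```

With health-checks enabled HAProxy quietly drops any Apache node that stops responding, which is what makes losing half the servers survivable.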

When I set up the site it all ran on one host; it was simpler, but less highly available. It also struggled to cope with the load.

Half the reason for writing/hosting the site in the first place was to document learning experiences, though, so when the time came to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server-count, and the design has been stable for the past few years.

Recently though I've begun planning how it will be deployed in the future and I have a new design:

Rather than having the Apache instances talk to the database I'll indirect through an API-server. The API server will handle requests like these:

  • POST /users/login
    • POST a username/password; return 200 if valid, 403 if the credentials are bogus, and 404 if the user doesn't exist.
  • GET /users/Steve
    • Return a JSON hash of user-information.
    • Return 404 on invalid user.
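The status-code contract above can be sketched in a few lines. This is a minimal Python sketch of the behaviour only; the in-memory user store, the field names, and the plain-text password check are placeholder assumptions, not the real implementation:

```python
# Sketch of the API status-code contract described above.
# The "database" and password handling are illustrative assumptions.
USERS = {"Steve": {"password": "s3cret", "joined": "2004-09-01"}}

def post_users_login(username, password):
    """POST /users/login -> 200 valid, 403 bad credentials, 404 unknown user."""
    user = USERS.get(username)
    if user is None:
        return 404
    if user["password"] != password:
        return 403
    return 200

def get_user(username):
    """GET /users/<name> -> (200, hash of user-information) or (404, None)."""
    user = USERS.get(username)
    if user is None:
        return 404, None
    # Never leak the password field in the returned hash.
    profile = {k: v for k, v in user.items() if k != "password"}
    return 200, profile
```

The point of pinning down the codes like this is that the Apache layer can treat the API as a black box: it only ever branches on the status code.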

I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and an HAProxy instance to route to multiple API-servers, each of which will use local caching for articles, etc.
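The per-server local caching could be as simple as a small TTL cache in front of the database fetch. A hedged Python sketch follows; the key scheme, the five-minute TTL, and the `fetch_from_db` callback are all assumptions for illustration:

```python
import time

class TTLCache:
    """Tiny per-process cache: entries expire after ttl seconds."""
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        expires, value = entry
        if time.time() > expires:
            del self.store[key]   # lazily evict stale entries
            return None
        return value

    def set(self, key, value):
        self.store[key] = (time.time() + self.ttl, value)

cache = TTLCache(ttl=300)

def get_article(article_id, fetch_from_db):
    """Return a cached article body, falling back to the database."""
    key = f"article:{article_id}"
    body = cache.get(key)
    if body is None:
        body = fetch_from_db(article_id)
        cache.set(key, body)
    return body
```

Because the cache lives inside each API-server, adding more API-servers adds cache capacity for free, at the cost of each node warming its own copy.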

This should simplify the middle layer, running on Apache, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP-request to fetch a (formatted) article body will be cheaper than making N database queries.

Anyway, that's what I'm slowly pondering and working on at the moment. I wrote a proof-of-concept API-server-based CMS two years ago, and my recollection of that time is that it was fast to develop and easy to scale.

8 comments

 

Comments on this entry

Ruggero at 11:30 on 7 February 2016

Have you thought about adding a caching layer (varnish) in front of your apaches?

Csillag Tamás at 11:37 on 7 February 2016
http://cstamas.hu

Sounds interesting; still, you should use Mojolicious (Mojolicious::Lite to be precise) for the backend ;-)

Steve Kemp at 11:44 on 7 February 2016
https://www.steve.org.uk/

Csillag: The proof of concept code I linked to above does indeed use that :)

Ruggero: I have considered that several times, but flushing things is hard unless you blow away the whole cache on-change.

e.g. If a new comment is posted you need to invalidate: The user's profile page, the article page, the list of recent comments, etc.
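The fan-out described here is the crux of the invalidation problem: one event makes several cached pages stale at once. A small Python sketch of the idea, where the cache keys and the event/dependency mapping are invented for illustration:

```python
# Sketch: one event (a new comment) must expire several cached pages.
# The key names and event mapping are illustrative assumptions.
cache = {"user:Steve": "...", "article:42": "...", "recent-comments": "..."}

DEPENDENTS = {
    "comment-posted": lambda author, article_id: [
        f"user:{author}",          # the commenter's profile page
        f"article:{article_id}",   # the article the comment is on
        "recent-comments",         # the site-wide recent-comments list
    ],
}

def invalidate(event, *args):
    """Drop every cache entry that the given event makes stale."""
    for key in DEPENDENTS[event](*args):
        cache.pop(key, None)
```

Maintaining that dependency map by hand is exactly the hard part; miss one entry and stale pages get served, which is why "blow away the whole cache on-change" is the tempting fallback.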

I've not set up suitable headers to handle those cases. Certainly caching all anonymous page-views for five minutes would be simple enough, but I kinda think if I'm going to do caching I should do it properly.
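The simple five-minute-anonymous-caching fallback mentioned here needs little more than a Cache-Control response header. A minimal sketch, assuming a boolean login check:

```python
def anonymous_cache_headers(logged_in):
    """Cache-Control for the five-minute anonymous caching mentioned above.

    Anonymous responses may be cached by any intermediary (e.g. Varnish)
    for 300 seconds; logged-in responses must not be shared.
    """
    if logged_in:
        return {"Cache-Control": "private, no-cache"}
    return {"Cache-Control": "public, max-age=300"}
```

A front-end cache honouring these headers gives the cheap win without any invalidation logic, at the cost of anonymous users seeing comments up to five minutes late.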

Marek at 12:34 on 7 February 2016

All I can see when I look at the new design is, "why have one single point of failure, when you can have three?!"

I'm sure you've got more than one haproxy at each end?

And, painfully bad though it is, MySQL replication at least to a semi-warm slave?

Marek at 12:35 on 7 February 2016

(for clarity: I think my problem is with the diagram rather than the explanation)

Micha at 12:40 on 7 February 2016

Hi Steve,

I wonder whether you add this additional layer of indirection just "because you can" or whether you're trying to address a particular performance bottleneck that you could measure (you didn't write anything about that). And how did you account for the additional complexity of your setup when making the decision?

Regards, Micha

Steve Kemp at 12:47 on 7 February 2016
https://www.steve.org.uk/

There are more moving parts, but each is logically simple. I'd expect there would be at least two instances of HAProxy instead of the one pictured, taking over a shared IP address via ucarp or similar.

So there would be points of failure, but not a single one.

Steve Kemp at 12:49 on 7 February 2016
https://www.steve.org.uk/

Micha: I'm looking to simplify things, believe it or not; the current tangle of CGI is making it hard to optimize the database or change the implementation (e.g. moving to PostgreSQL).

Half the goal here is to untangle the implementation, and ensure we have clean separation between the front-end and the back-end. Although I've no interest in pure JavaScript, it would be possible to read articles just by making AJAX requests directly to the API-server, and that's something I'd be interested in exploring.

As for performance? Right now it is acceptable. If it slows I can grow by giving more RAM to MySQL, or by adding new Apache boxes, but I think the new approach would scale better because I certainly expect caching at the API-layer.