Status Quo & Requirements Analysis
About a year ago, my day job involved relaunching our server cluster and its underlying architecture, so I took this as motivation to invest some time of my own and take full control of my personal server infrastructure, replacing my DreamHost web and DB VPS instances with root servers.
First, I compared a few hosting companies, including my former provider DreamHost with their then recently launched product line, DreamCompute. Unfortunately, my Munin monitoring and some tests I conducted with iftop revealed numerous unexplained network errors. In addition, DreamCompute’s block storage, which is built on Ceph, was – obviously – slower than a local SSD RAID array. So I decided to move my stuff to another provider.
Based on earlier tests with the DreamCompute root instances, I figured two machines, one with 1GB and one with 2GB of RAM, should suffice for my requirements. One of those requirements was that – as far as possible and necessary – all I/O-intensive operations (such as the generation and delivery of HTML content and related assets, as well as DB operations) should take place in RAM, e.g. via shared memory, the PHP OPcache, the InnoDB buffer pool, etc.
At that time I had a few machines with OVH, but they had been going down for anywhere from a few minutes to half an hour a day. That ruled them out, although they remain useful for development or experimentation.
DigitalOcean, RamNode and a few other players did not offer plans that matched my needs.
After doing more research, I took another look at Linode. These guys already had some history in the business – their first blog post dates back to 2003. Following a serious hack at Christmas 2015, they were very forthcoming about their past omissions and their plans to better protect their infrastructure going forward, which increased my trust in their endeavor.
Until last year, Linode’s smallest plan cost $20, so I hadn’t considered them for my private projects at first. I was happy to read that they decided to change that in summer 2016, now offering
- one core (2.3–2.5 GHz Intel Xeon E5, depending on the data center and host machine)
- 2GB RAM
- 25GB SSD storage (30GB since February 2017)
- 2TB outbound transfer volume (lately with 1 Gbps)
for $10. In addition, you can purchase a backup plan for $2.50 per instance.
So I signed up for two machines, one in the US and one in Europe. The first one would be used for web delivery with
- NGINX for static assets and as reverse proxy
- Apache 2 as upstream/backend server
- PHP-FPM 7 as middleware
- MariaDB 10 as database
in an attempt to unite NGINX and Apache in order to gain the best of both worlds.
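A rough sketch of how such a split can look in NGINX: static assets are served straight from disk, everything else is proxied to Apache. The server name, paths and the local port 8080 for Apache are placeholders for illustration, not my actual configuration.

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;          # placeholder

    # serve static assets directly from disk
    location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
        root /var/www/example;        # placeholder path
        expires 30d;
    }

    # everything else goes upstream to Apache (and from there to PHP-FPM)
    location / {
        proxy_pass http://127.0.0.1:8080;   # Apache listening locally
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With this split, .htaccess rules keep working for the proxied requests because Apache still processes them, while NGINX handles the high-volume static traffic.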
Of course, it’s possible to leave Apache out of the stack, but some WordPress plugins I need still rely on .htaccess files, especially for security purposes, and I didn’t want to rewrite and maintain all that code myself. Moreover, the 6G Firewall gives me an extra security layer.
The second instance was meant to serve as monitoring server and rsync backup storage.
My OS of choice was Ubuntu 16.04 LTS (Xenial Xerus) because of
- OpenSSL 1.0.2g, whose ALPN support is in turn necessary for HTTP/2
- the packaged NGINX, which ships with HTTP/2 support.
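Both prerequisites can be checked directly on the machine. The grep patterns below are assumptions about typical tool output, not exact guarantees, and the NGINX check is guarded since it only works where NGINX is installed.

```shell
# quick sanity checks for the HTTP/2 prerequisites mentioned above
openssl version                                # want 1.0.2 or newer for ALPN
openssl s_client -help 2>&1 | grep -i alpn     # s_client should list an -alpn option
# nginx may not be present on the host you run this on, so guard the check:
if command -v nginx >/dev/null 2>&1; then
  nginx -V 2>&1 | grep -o with-http_v2_module  # built with the HTTP/2 module?
else
  echo "nginx not installed on this host"
fi
```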
I was especially looking forward to giving NGINX a whirl.
NGINX: a small but very powerful and efficient web server and mail proxy
While I have been working with Apache 2 for close to two decades now, I had always wanted to try NGINX. Its location settings and their matching priority can be a bit tricky when covering exceptions. Still, its official HTTP/2 support, low memory footprint and performance made it well worth a try.
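To illustrate what makes the location priority tricky (the paths here are made up for the example): exact matches win, a `^~` prefix match suppresses all regex locations, regexes are then tried in file order, and only afterwards does the longest plain prefix apply.

```nginx
location = /status      { return 200 "exact match\n"; }       # 1. exact match wins
location ^~ /downloads/ { root /var/www; }                    # 2. priority prefix, skips regexes
location ~* \.php$      { proxy_pass http://127.0.0.1:8080; } # 3. first matching regex
location /              { try_files $uri $uri/ =404; }        # 4. longest plain prefix, last resort
```

So a request for `/downloads/tool.php` is served from disk, not passed to PHP – exactly the kind of exception that is easy to get wrong.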
Third-party services and other daemons
Now that I had all the possibilities and functionality of a full, up-to-date Debian-based machine, I deployed some helpers like Monit and wrote a few dozen scripts to monitor and automate as much of the maintenance and update work as possible.
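To give a flavor of those helper scripts, here is a minimal sketch of a disk-usage check. The 90% threshold and the plain `echo` standing in for a notification mail are my assumptions, not a description of the actual scripts.

```shell
#!/bin/sh
# warn when the root filesystem crosses a usage threshold
THRESHOLD=90  # percent; pick whatever margin suits your disks
USAGE=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "WARN: / is at ${USAGE}% capacity"   # a real script would send a mail here
else
  echo "OK: / is at ${USAGE}% capacity"
fi
```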
For example, replacing my GeoTrust certificates with free, auto-renewing Let’s Encrypt certificates turned one of the most cumbersome processes into a fully automated security add-on.
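The renewal side boils down to a single cron entry; the schedule and the reload hook below are a typical sketch, not my exact setup. `certbot renew` only touches certificates that are close to expiry, so running it daily is cheap.

```
# /etc/cron.d/certbot-renew (sketch)
# renew anything near expiry, then reload NGINX to pick up the new certs
17 3 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```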
In addition, I addressed my security concerns with UFW (Uncomplicated Firewall), Fail2ban (which unfortunately did not support IPv6 at the time of this post), SSH key-based authentication, restricted users and file permissions, a separate read-only user for web delivery, etc.
Finally, it was also fun to combine figlet and cowsay for artistic notification mails.
Two-node content delivery aka Mini-CDN
The traffic on my sites is somewhat unevenly distributed geographically, so some of KeyCDN’s edge nodes are rarely requested by my visitors. Assets on these nodes appear to be evicted after a certain time and thus need to be retrieved from the origin server again. With HTTPS, which most browsers require for HTTP/2, this retrieval also involves two TLS handshakes, i.e. browser ↔ CDN edge node ↔ origin server.
While catching up on my subscribed feeds, I got the idea for a mini-CDN from a blog post at NS1. Their intelligent anycasted DNS service is very impressive, and they offer a free trial in which you can add two monitors, one per CDN node, to redirect traffic to the other server if one of them goes down.
NS1’s main USP, though, is the ability to geo-load-balance traffic by IP address to minimize round-trip times and latency for users. This can be done at the continent or even country level.
So first of all, I used rsync to distribute the CDN assets to the second server, which until then had served only as monitoring and backup machine. After setting up NGINX, along with the monitoring and load-balancing rules on NS1, I was able to deliver content like images, videos and HTML-related assets via my own CDN.
To fully use my own CDN’s power, I also deployed FlickrMovr, which now retrieves all image content for my sites from Flickr.
After conducting more tests and watching the traffic, I was able to downgrade the second node to the $5 instance Linode has offered since February 2017. This machine comes with 1GB of RAM, which is easily enough for my NGINX and Munin requirements.
I hope you enjoyed these insights into my web infrastructure.