How to deploy a highly available & scalable WordPress website on AWS using Terraform

The inspiration for this series of articles comes from our experience working on an eCommerce website that saw surges in traffic during promotional events. Our task was to migrate the website from a single server to a highly available solution deployed on AWS.

GitHub repository for the Terraform code: https://github.com/crerem/Sg12-published-wp-aws-terraform

The code presented in these articles is close to the actual solution, but it shows only a portion of the work done, and there are a few client-specific differences that are not covered in this article.

The client built the website with WordPress, the most popular CMS. The application is written in PHP and uses a MySQL database.

The WordPress CMS has some particularities:

  • The application PHP code lives in the wp-includes, wp-admin, and root folders.
  • The CMS uses a theme and plugin system. These files are stored in the wp-content/plugins and wp-content/themes folders.
  • The admin interface allows uploading both images and videos, which are then stored in the wp-content/uploads folder.
  • The data (articles, page content, blog posts) is stored in a database.
  • By default, the MySQL database is installed on the same machine as the web server and the file storage.

The solution consists of a two-layer architecture deployed on AWS across multiple availability zones.

The first layer consists of “compute” resources: a load balancer, an Auto Scaling group, and EC2 instances. The second, “data” layer is composed of an RDS instance running a MySQL server.
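
As a rough orientation, here is a minimal Terraform sketch of the compute layer's entry point: the Application Load Balancer and the target group the EC2 instances will register with. The resource names and the variables (var.public_subnet_ids, var.alb_sg_id, var.vpc_id) are illustrative assumptions, not the exact code from the repository.

```hcl
# Entry point of the compute layer: an Application Load Balancer
# and the target group that the EC2 instances register with.
resource "aws_lb" "wordpress" {
  name               = "wordpress-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids   # assumed variable
  security_groups    = [var.alb_sg_id]         # assumed variable
}

resource "aws_lb_target_group" "wordpress" {
  name     = "wordpress-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id                        # assumed variable

  health_check {
    path = "/"
  }
}
```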

Here is the architectural diagram:

The entry point is an Elastic Load Balancer that balances and sends the traffic to a series of EC2 instances. These EC2 instances run WordPress, and in a “high traffic” situation we can quickly scale out by deploying new machines.

We do that by deploying an Auto Scaling group that watches the CPU load of these instances and adds or terminates machines according to the load.
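
A minimal sketch of how the Auto Scaling group and a CPU-based scaling policy can be expressed in Terraform. The sizes and the 60% target are placeholders, not the client's actual numbers, and var.private_subnet_ids is an assumed variable; the launch template referenced here is sketched in the next snippet.

```hcl
# Auto Scaling group that keeps the WordPress fleet between 2 and 6 instances
resource "aws_autoscaling_group" "wordpress" {
  min_size            = 2
  max_size            = 6
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids               # assumed variable
  target_group_arns   = [aws_lb_target_group.wordpress.arn]

  launch_template {
    id      = aws_launch_template.wordpress.id   # defined in the next snippet
    version = "$Latest"
  }
}

# Target-tracking policy: add or remove instances to keep average CPU around 60%
resource "aws_autoscaling_policy" "cpu" {
  name                   = "wordpress-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.wordpress.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```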

When we deploy new EC2 instances, we use a launch template that specifies the instance type and the AMI (Amazon Machine Image). There is also a small bootstrap script that runs when the machine starts; this script performs last-minute configuration.
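
In Terraform, that translates roughly into a launch template like the one below. The AMI ID variable, instance type, key name, security group variable, and bootstrap commands are placeholders; the real bootstrap script is covered later in the series.

```hcl
# Launch template used by the Auto Scaling group: golden AMI, instance type,
# and a small bootstrap script that runs at boot time.
resource "aws_launch_template" "wordpress" {
  name_prefix   = "wordpress-"
  image_id      = var.golden_ami_id          # assumed variable holding the golden AMI ID
  instance_type = "t3.small"                 # placeholder instance type
  key_name      = var.key_name               # assumed variable

  vpc_security_group_ids = [var.web_sg_id]   # assumed variable

  # Last-minute configuration performed when the instance starts
  user_data = base64encode(<<-EOF
    #!/bin/bash
    systemctl enable httpd
    systemctl start httpd
  EOF
  )
}
```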

We must treat these machines as temporary resources. When the application is in high demand, we will have many instances, but when things are calm, the system terminates the excess instances, and we can end up with only one EC2 instance.

Because of this volatility, we cannot store persistent data on the EC2 instances. The solution is to deploy an Elastic File System (EFS) and mount it on each EC2 instance.

Afterward, we link the WordPress wp-content folder to the EFS. This guarantees that all media files, plugins, and themes are stored outside the EC2 instances.
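
A minimal sketch of the EFS pieces in Terraform, plus the kind of mount command the bootstrap script would run. The names, the var.private_subnet_ids and var.efs_sg_id variables, and the mount path are illustrative assumptions.

```hcl
# Shared file system for wp-content, with one mount target per private subnet
resource "aws_efs_file_system" "wp_content" {
  creation_token = "wordpress-wp-content"
  encrypted      = true
}

resource "aws_efs_mount_target" "wp_content" {
  for_each        = toset(var.private_subnet_ids)   # assumed variable
  file_system_id  = aws_efs_file_system.wp_content.id
  subnet_id       = each.value
  security_groups = [var.efs_sg_id]                 # assumed variable
}

# Illustrative bootstrap lines (bash, run on each instance) that mount the EFS over wp-content:
#   mount -t nfs4 -o nfsvers=4.1 <efs-dns-name>:/ /var/www/html/wp-content
#   echo "<efs-dns-name>:/ /var/www/html/wp-content nfs4 defaults 0 0" >> /etc/fstab
```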

This way, when we terminate an EC2 instance, we do not lose the plugins, themes, or uploaded media. All of the deployed EC2 instances share this EFS and have access to the same data.

Following the same principle, we create an RDS instance that runs the MySQL server. The PHP application on the EC2 instances connects to this database, so no data is saved locally.

Since we want our application to be highly available, we will deploy our resources on multiple availability zones.
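
The data layer can then be sketched as a Multi-AZ RDS instance, along the lines of the snippet below. The engine version, instance class, and credential variables are placeholders, not the values used for the client.

```hcl
resource "aws_db_subnet_group" "wordpress" {
  name       = "wordpress-db-subnets"
  subnet_ids = var.private_subnet_ids                # assumed variable
}

# MySQL database for WordPress, with a standby replica in a second availability zone
resource "aws_db_instance" "wordpress" {
  identifier             = "wordpress-db"
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t3.small"             # placeholder size
  allocated_storage      = 20
  db_name                = "wordpress"
  username               = var.db_username           # assumed variable
  password               = var.db_password           # assumed variable; keep out of source control
  multi_az               = true
  db_subnet_group_name   = aws_db_subnet_group.wordpress.name
  vpc_security_group_ids = [var.db_sg_id]            # assumed variable
  skip_final_snapshot    = true                      # fine for a demo, not for production
}
```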

One last thing about WordPress and EC2: when we deploy the EC2 instances, we use a “golden AMI” image. This AMI is an image of a web server (Apache) on which we deployed the PHP files and performed some configuration.

How do you create one? 

In a few words:

  • We deployed an EC2 instance (a t2.micro is just fine). In our case, we used Amazon Linux.
  • We installed Apache, MySQL, and PHP.
  • We installed WordPress in the /var/www/html/ folder.
  • We adjusted PHP settings such as upload_max_filesize and the memory limit, installed the GD library, and made a few other configuration changes.
  • We ran a yum update and wrote a short bootstrap script that runs every time we start an EC2 instance (we will provide more explanations when we reach that point).

Consider this EC2 instance a micro server where you set up everything as you like. Then, when everything works as expected and you have finished all the PHP settings, you can uninstall MySQL and create a new AMI image.
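
The AMI itself can be created from the console ("Create image") or the CLI; as one hedged alternative, Terraform can also capture the configured instance as an AMI. The instance ID below is a placeholder.

```hcl
# One way to turn the configured instance into a reusable golden AMI
resource "aws_ami_from_instance" "golden" {
  name               = "wordpress-golden-ami"
  source_instance_id = "i-0123456789abcdef0"   # placeholder: the configured EC2 instance
}
```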

When we deploy this architecture, we can access our new WordPress website by calling the ELB DNS name. As a side note, you will need to integrate an SSL certificate with your load balancer.

To integrate an SSL certificate with an AWS load balancer, you can follow these steps:

First, obtain an SSL certificate from a trusted certificate authority (CA) or use a certificate from AWS Certificate Manager (ACM).

Then, go to the Listeners tab for the load balancer, add a listener that uses the HTTPS protocol, and choose the SSL certificate you obtained in the first step, either from the CA or from ACM.

Configure the load balancer to forward incoming HTTPS traffic to the desired target group and, finally, update the DNS records to point to the DNS name of the load balancer.

Note that if you use an ACM certificate, you don’t need to upload the SSL certificate; it will be associated with the load balancer automatically.

Once you’ve completed these steps, traffic between clients and the load balancer on port 443 will be encrypted using the SSL certificate; the load balancer terminates the TLS connection and forwards the requests to the target group.
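
In Terraform, the HTTPS listener and the ACM certificate lookup look roughly like the sketch below. The domain name and SSL policy are placeholders, and the certificate is assumed to already be issued in the load balancer's region.

```hcl
# ACM certificate looked up by domain name
data "aws_acm_certificate" "site" {
  domain   = "example.com"                 # placeholder domain
  statuses = ["ISSUED"]
}

# HTTPS listener that terminates TLS and forwards requests to the target group
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.wordpress.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"   # placeholder security policy
  certificate_arn   = data.aws_acm_certificate.site.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.wordpress.arn
  }
}
```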

While this architecture is scalable and highly available, there are many extra things you can do to improve it. For example, you can add a CDN, an RDS Proxy, caching, and so on. We will provide additional information at the end of the series.

The following article explains how we developed this infrastructure using Terraform code. 
