This is the last part of the three-article series about deploying WordPress on AWS using Terraform.
- Part 1 – Explaining the Architecture, how WordPress works and what needs to be done.
- Part 2 – Terraform code – explanations and code structure
- Part 3 – Deployment via Terraform Cloud and things you can do to improve the infrastructure
Github Repository for Terraform code: https://github.com/crerem/Sg12-published-wp-aws-terraform
As explained in the previous article, we structured our Terraform code into modules, with a folder per environment. In addition, there is a single GitHub repository with a single main branch; any code update pushed to GitHub triggers a “terraform apply” and deploys the updated infrastructure.
This method of separating code into folders per environment is appropriate when there are significant variations between environments: for example, when you deploy specific resources in production, others in testing, and still others in development.
As a side note: there are alternative approaches to organizing code for different environments. One option is to use a separate repository branch for each environment (e.g., development, testing, and production branches on GitHub) rather than different folders. Another approach is to use multiple repositories for a larger project. In this case, however, we stick with the single-branch model.
As we explained in the previous article, we chose to keep the infrastructure state in Terraform Cloud rather than locally or in an AWS S3 bucket. We will log into our Terraform Cloud account to deploy the code, creating a separate workspace per environment.
So, in Terraform Cloud:
- Create a new Project and then a new Workspace
- For Workspace Workflow, choose a version control workflow and select GitHub.
- Connect your GitHub account and choose the correct repository.
- Open the advanced options and enter your “Terraform Working Directory.” For the devel workspace, enter the name of the devel folder, and so on.
- Choose between manual and auto apply. If you are still testing the code, you may want to choose “Manual apply” (you will have to confirm each “terraform apply”).
- Create two variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, containing the correct AWS API credentials. Mark both as sensitive.
- Create a workspace per environment following the steps above.
After you create your workspaces, you can trigger a “run” from the Terraform Cloud interface or by doing a GitHub push. Start with the “devel” workspace and, if necessary, continue with “testing” and “production” ones.
Once the infrastructure is deployed, you can take the load balancer’s DNS name and open your new website. For the production website, you also need to point your domain at the load balancer.
An installation screen appears the first time you load one of the websites. Enter your email, the name of the new website, and the rest of the fields. Log in to the administration area, and you can start using the website. You may also want to install a theme and plugins and adjust other settings.
You should also consider using separate AWS accounts for the devel, testing, and production environments. There are several reasons for this:
- Keeping production and testing environments in separate accounts helps minimize the risk of unauthorized access or data breaches.
- Having separate accounts for production and testing allows you to track and manage costs for each environment separately. It will make it easier to identify and optimize expenses for each area.
- This model ensures that resources like computing and storage are isolated. You can prevent conflicts or production disruptions caused by testing and development activities.
You could deploy each environment into a different account by setting the AWS_DEPLOY_ROLE variable in variables.tf:
```hcl
variable "AWS_DEPLOY_ROLE" {
  default     = "arn:aws:iam::xxxxxxxx:role/TerrafomSG12Wordpress"
  description = "AWS role with the rights to deploy the infrastructure"
}
```
In the code above, we set the default value of the AWS_DEPLOY_ROLE variable to the ARN of a role with the proper permissions in the target AWS account.
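The provider block in each environment folder can then consume that role. A minimal sketch, assuming the Terraform Cloud credentials are allowed to assume the role (region and session name are illustrative):

```hcl
# Illustrative provider configuration: the credentials Terraform Cloud runs
# with must be permitted to assume the role referenced by AWS_DEPLOY_ROLE.
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = var.AWS_DEPLOY_ROLE
    session_name = "terraform-deploy"
  }
}
```

With a different role ARN per environment folder, each workspace deploys into its own AWS account without any other code change.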
We conducted a test deployment and created a development and testing environment, resulting in two load balancers, each with a different name and DNS.
In the variables.tf file, we specified a desired capacity of two EC2 instances for the “testing” autoscaling group and one for the “development” autoscaling group. As shown in the screenshot, our code deployed three EC2 instances, one for development and two for the testing environment, each in separate availability zones.
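The per-environment capacity can be expressed as a variable consumed by the autoscaling group. A sketch with illustrative names (the actual variable and resource names in the repository may differ):

```hcl
# Overridden per environment: 1 in devel/variables.tf, 2 in testing/variables.tf.
variable "ASG_DESIRED_CAPACITY" {
  default     = 1
  description = "Number of EC2 instances the autoscaling group should run"
}

resource "aws_autoscaling_group" "wordpress" {
  min_size         = 1
  max_size         = 3
  desired_capacity = var.ASG_DESIRED_CAPACITY
  # ... launch template, subnets, target group attachments, etc.
}
```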
How can we improve this infrastructure?
Please note that this code should be viewed as a starting point. While it is relatively close to being production-ready, additional steps must be taken to prepare it for production use.
For data security, we need protection both in transit and at rest.
For “security in transit,” you need to install an SSL certificate (you can generate one in AWS Certificate Manager) on the Elastic Load Balancer. You will also need to update the corresponding site-address setting in the WordPress admin.
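In Terraform, this amounts to an HTTPS listener on the load balancer. A sketch, where `aws_lb.wordpress`, `aws_lb_target_group.wordpress`, and the ACM certificate resource are illustrative names:

```hcl
# HTTPS listener terminating TLS at the load balancer with an ACM certificate.
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.wordpress.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = aws_acm_certificate.wordpress.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.wordpress.arn
  }
}
```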
You can enforce “security at rest” by encrypting the sensitive content stored on your system, which you can achieve using AWS KMS.
For this deployment, the data stored in EFS is public, so encryption is not strictly necessary. You can, of course, enable encryption on the RDS database, but if you store only public information (like blog articles), you may skip this step.
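If you do want encryption at rest, it is a small change on both resources. A sketch with illustrative resource names:

```hcl
# Encryption at rest for the shared file system and the database.
resource "aws_efs_file_system" "wordpress" {
  encrypted  = true
  kms_key_id = aws_kms_key.wordpress.arn # omit to use the AWS-managed key
}

resource "aws_db_instance" "wordpress" {
  # ... engine, instance class, credentials, etc.
  storage_encrypted = true
  kms_key_id        = aws_kms_key.wordpress.arn
}
```

Note that encryption must be set when the resources are created; it cannot be toggled on an existing EFS file system or RDS instance in place.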
Regarding network security: the database and the EFS file system are the sensitive points. That’s why we placed them in private subnets. We also deployed security groups that limit access to these resources: the RDS database security group allows connections only from the security group attached to the WordPress instances.
```hcl
ingress {
  description     = "Allow NFS in via 2049 from WordPress SG"
  from_port       = 2049
  to_port         = 2049
  protocol        = "tcp"
  security_groups = [aws_security_group.SG_Allow_Wordpress.id]
}

ingress {
  description     = "Allow MySQL in via 3306 from WordPress SG"
  from_port       = 3306
  to_port         = 3306
  protocol        = "tcp"
  security_groups = [aws_security_group.SG_Allow_Wordpress.id]
}
```
Extra step: You can deploy AWS WAF (Web Application Firewall) on the Elastic Load Balancer, which will protect you against common attacks such as SQL injection and cross-site scripting.
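A sketch of what that could look like, using one of the AWS managed rule groups; the resource names are illustrative:

```hcl
# Regional web ACL with the AWS managed SQL-injection rule group,
# associated with the application load balancer.
resource "aws_wafv2_web_acl" "wordpress" {
  name  = "wordpress-waf"
  scope = "REGIONAL" # ALBs require a regional web ACL

  default_action {
    allow {}
  }

  rule {
    name     = "aws-managed-sqli"
    priority = 1

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesSQLiRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "sqli"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "wordpress-waf"
    sampled_requests_enabled   = true
  }
}

resource "aws_wafv2_web_acl_association" "alb" {
  resource_arn = aws_lb.wordpress.arn
  web_acl_arn  = aws_wafv2_web_acl.wordpress.arn
}
```

You can add further managed rule groups (for example, the common rule set, which covers cross-site scripting) as additional `rule` blocks with increasing priorities.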
The Elastic Load Balancer is highly available and deployed across multiple availability zones, so it is not considered a point of failure. The same goes for the autoscaling group: while one EC2 instance per autoscaling group is enough for the devel and testing environments, you should run more instances in production.
The most sensitive point of this architecture is the RDS database. In our case, the database is not deployed across multiple availability zones, so you should address that. You could also use Amazon Aurora, which replicates data across multiple AZs automatically.
In case of high-volume traffic, you can deploy additional read replica instances that handle some of the read traffic, while write traffic still goes to the primary instance.
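Both improvements are single-attribute changes in Terraform. A sketch, where `aws_db_instance.wordpress` is an illustrative name for the existing database resource:

```hcl
# Multi-AZ on the primary plus a read replica for read-heavy traffic.
resource "aws_db_instance" "wordpress" {
  # ... engine, instance class, credentials, etc.
  multi_az = true # synchronous standby in a second availability zone
}

resource "aws_db_instance" "wordpress_replica" {
  replicate_source_db = aws_db_instance.wordpress.identifier
  instance_class      = "db.t3.micro"
  skip_final_snapshot = true
}
```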
Another solution would be to deploy an RDS Proxy in front of the database. RDS Proxy improves performance by pooling and sharing database connections, caching credentials, and managing the connection lifecycle.
The proxy reduces the number of connections the application must open and close and eliminates the need to reconnect to the database after a connection is lost.
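A sketch of the proxy in Terraform; it assumes the database credentials live in Secrets Manager and that an IAM role allowed to read that secret exists (all resource names here are illustrative):

```hcl
# RDS Proxy in the private subnets, authenticating via Secrets Manager,
# with the existing database registered as its target.
resource "aws_db_proxy" "wordpress" {
  name                   = "wordpress-db-proxy"
  engine_family          = "MYSQL"
  role_arn               = aws_iam_role.rds_proxy.arn
  vpc_subnet_ids         = aws_subnet.private[*].id
  vpc_security_group_ids = [aws_security_group.SG_Allow_Wordpress.id]

  auth {
    auth_scheme = "SECRETS"
    secret_arn  = aws_secretsmanager_secret.db_credentials.arn
    iam_auth    = "DISABLED"
  }
}

resource "aws_db_proxy_default_target_group" "wordpress" {
  db_proxy_name = aws_db_proxy.wordpress.name
}

resource "aws_db_proxy_target" "wordpress" {
  db_proxy_name          = aws_db_proxy.wordpress.name
  target_group_name      = aws_db_proxy_default_target_group.wordpress.name
  db_instance_identifier = aws_db_instance.wordpress.identifier
}
```

WordPress would then use the proxy’s endpoint as DB_HOST in wp-config.php instead of the RDS endpoint.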
Also, deploying a caching layer using Redis or Memcached can further improve the performance and scalability of your application. Both are in-memory data stores commonly used as caching layers.
When an application requests data from a relational database, retrieval can take significant time, especially if the data is not appropriately indexed or the query is complex. A caching layer mitigates this by keeping a copy of the data in memory, where it can be retrieved much faster.
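On AWS, the managed option is ElastiCache. A sketch of a single-node Memcached cluster in the private subnets, with illustrative names; a WordPress object-cache plugin would then point at its endpoint:

```hcl
# Single-node Memcached cluster, reachable only from the private subnets.
resource "aws_elasticache_subnet_group" "wordpress" {
  name       = "wordpress-cache-subnets"
  subnet_ids = aws_subnet.private[*].id
}

resource "aws_elasticache_cluster" "wordpress" {
  cluster_id         = "wordpress-cache"
  engine             = "memcached"
  node_type          = "cache.t3.micro"
  num_cache_nodes    = 1
  port               = 11211
  subnet_group_name  = aws_elasticache_subnet_group.wordpress.name
  security_group_ids = [aws_security_group.SG_Allow_Wordpress.id]
}
```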
Finally, you should also consider deploying a CloudFront CDN distribution. There are many WordPress caching plugins that work directly with CloudFront, and it will improve your website’s performance.
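A minimal distribution in front of the load balancer could look like this; cookies and query strings are forwarded because WordPress relies on them, and the resource names are illustrative:

```hcl
# CloudFront distribution using the ALB as a custom origin.
resource "aws_cloudfront_distribution" "wordpress" {
  enabled = true

  origin {
    domain_name = aws_lb.wordpress.dns_name
    origin_id   = "wordpress-alb"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "wordpress-alb"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = true
      cookies {
        forward = "all"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true # replace with an ACM cert for a custom domain
  }
}
```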
There are plenty of things you could do to improve this architecture; you must weigh your project’s needs, traffic patterns, and budget. However, we hope this article series gives you a good starting point.