How to deploy a highly available & scalable WordPress website on AWS using Terraform – part 2

This is part two of a three-part series about deploying WordPress on AWS using Terraform.

GitHub repository with the Terraform code: https://github.com/crerem/Sg12-published-wp-aws-terraform

Now that we have discussed the general structure of our AWS WordPress website, it is time to move on to the actual deployment process.

There are multiple ways to accomplish this, including the AWS console or the AWS Command Line Interface (CLI). However, the most efficient and effective method is to use Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.

Using these tools, we can take advantage of the benefits of IaC, such as version control, automation, and easy scaling.

Terraform and CloudFormation are popular IaC tools that let us define the infrastructure for our WordPress website in a simple, readable format: HCL for Terraform, YAML or JSON for CloudFormation. This code can then be versioned, saved in a GitHub repository, shared, and reused across different environments and teams. Additionally, these tools allow us to automate the creation, update, and deletion of resources and configurations, reducing the risk of human error and saving time.

Our Terraform project structure looks like this:

├── modules
│   ├── compute
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── database
│   ├── load-balancer
│   ├── network
│   ├── security
│   └── storage
├── devel
│   ├── config.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── provider.tf
│   └── variables.tf
├── production
└── testing

We separated our code into modules, and each module has its own folder. A Terraform module is a container for multiple resources that work together. Modules help you create lightweight abstractions, so you can describe your infrastructure in terms of its architecture.

This way, you avoid creating a monolith by putting all your code in main.tf, and you can reuse the modules according to your needs.

Besides splitting our code into modules, we keep each environment in its own folder (or even a separate repository). For this project, we decided to have a folder for each environment: development (devel), testing, and production.

Now, let’s look over the Terraform code. You can find it in the GitHub repository linked above.

We start with variables.tf. In this file, we define a series of variables that help us manage our deployment: the AWS region, the CIDR blocks of the VPC and its subnets, the AMI, and the EC2 instance type.

Each deployment folder has its own variables.tf file, so we can configure different resource sizes per environment. For example, we use a t2.micro for the testing environment, with the Auto Scaling group running only one instance.

For the production environment, you may need a larger EC2 instance type and more instances in the Auto Scaling group. So, you set different values per environment.

You can do the same thing for the MySQL database: for development and testing, you use a db.t3.micro, while for production, you can use a bigger instance class.
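As an illustration, the devel and production copies of variables.tf can differ only in their defaults. The variable names and values below are hypothetical, not copied from the repository:

# devel/variables.tf – small, cheap instances
variable "INSTANCE_TYPE" {
  type    = string
  default = "t2.micro"
}

variable "DB_INSTANCE_CLASS" {
  type    = string
  default = "db.t3.micro"
}

# production/variables.tf declares the same variables with larger
# defaults, e.g. "m5.large" and "db.r5.large"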

We continue with main.tf in the devel folder. First, we read some secure parameters: the database user and password, which live in AWS Systems Manager (SSM) Parameter Store. You will have to declare these manually in your AWS console (if they are not already given to you by an administrator).
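As a sketch, reading such parameters in Terraform can look like this (the parameter names are assumptions, not the ones used in the repository):

# Read pre-created SecureString parameters from SSM Parameter Store.
data "aws_ssm_parameter" "db_user" {
  name = "/wordpress/DB_USER" # hypothetical parameter name
}

data "aws_ssm_parameter" "db_password" {
  name            = "/wordpress/DB_PASSWORD" # hypothetical parameter name
  with_decryption = true
}

The values are then available as data.aws_ssm_parameter.db_user.value and data.aws_ssm_parameter.db_password.value.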

Now, whenever you manage this kind of data, you have to treat it as sensitive. If you keep the state file local, you may notice that Terraform saves the passwords in plain text, so keeping the state on your local disk may not be a good idea.

When using a remote state with Terraform, the state is not saved to the local disk. Additionally, certain backends can encrypt the state data at rest.

You can use Terraform Cloud, which always encrypts the state at rest and protects it with TLS while the data is in transit. Or you can use an S3 bucket inside your AWS account.
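If you go with the S3 option, a minimal backend configuration looks roughly like this (the bucket and key names are placeholders):

terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket" # placeholder bucket name
    key     = "wordpress/devel/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true # encrypt the state object at rest
  }
}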

We decided to use the Terraform Cloud option, and you can enable it by adding this block (see provider.tf):

terraform {
  cloud {
    # The cloud block cannot reference Terraform variables; use literal
    # values here, or set the TF_CLOUD_ORGANIZATION and TF_WORKSPACE
    # environment variables instead.
    organization = "your-organization"

    workspaces {
      name = "your-workspace"
    }
  }
}

But more on this in the last part of this series.

After we added the secure parameters, we instantiate our modules. In our case, the modules are “compute,” “database,” “load-balancer,” “network,” “security,” and “storage.”

For each module, we have a variables.tf file (where we define the variables used inside the module’s main.tf) and an outputs.tf file (where we define the values the module returns). A module’s output can become another module’s input.

For example, the “compute” module has an input variable called SG (security group), and we set it via

SG = module.security.SG_Allow_Wordpress.id

The value is the ID of a security group defined in the security module. Then, when we run the terraform apply command, Terraform resolves the dependencies between modules and deploys each resource in an order that satisfies them.
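A condensed sketch of this wiring, using the names from the example above (the surrounding arguments are illustrative, not the exact ones from the repository):

# devel/main.tf – a security-module output feeds the compute module
module "security" {
  source = "../modules/security"
  VPC_ID = module.network.VPC_ID
}

module "compute" {
  source = "../modules/compute"
  SG     = module.security.SG_Allow_Wordpress.id
}

# modules/security/outputs.tf – expose the security group to other modules
output "SG_Allow_Wordpress" {
  value = aws_security_group.allow_wordpress
}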

A few words on the actual modules

In the “compute” module, we defined a launch configuration that the Auto Scaling group uses, plus the CloudWatch metric alarms the Auto Scaling group reacts to when adding or removing EC2 instances.
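A skeleton of this pairing, with hypothetical names and thresholds:

resource "aws_autoscaling_group" "wordpress" {
  launch_configuration = aws_launch_configuration.wordpress.name
  min_size             = var.ASG_MIN_SIZE
  max_size             = var.ASG_MAX_SIZE
  vpc_zone_identifier  = var.SUBNET_IDS
}

# Scale-out policy triggered by the CloudWatch alarm below.
resource "aws_autoscaling_policy" "scale_up" {
  name                   = "wordpress-scale-up"
  autoscaling_group_name = aws_autoscaling_group.wordpress.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 300
}

# Fire when average CPU across the group stays above 70% for two periods.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "wordpress-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 120
  statistic           = "Average"
  threshold           = 70
  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.wordpress.name
  }
  alarm_actions = [aws_autoscaling_policy.scale_up.arn]
}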

The launch configuration holds the bootstrap script (see “user data”). This script edits the wp-config.php file (the WordPress file that holds the database connection details), replacing strings like “localhost” and “database_name_here” with the actual database host (provided by the database module) and the values of the SSM parameters for the database username and password.

After that, it mounts the Elastic File System (when we created the AMI, we installed efs-utils) and gives the apache user permission over the WordPress files. A heavily trimmed sketch of such a launch configuration follows below.
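The file paths, variable names, and sed patterns here are assumptions for illustration; the full script is in the repository:

resource "aws_launch_configuration" "wordpress" {
  image_id             = var.AMI
  instance_type        = var.INSTANCE_TYPE
  security_groups      = [var.SG]
  iam_instance_profile = var.INSTANCE_PROFILE

  user_data = <<-EOF
    #!/bin/bash
    # Mount the shared Elastic File System (efs-utils is baked into the AMI).
    mount -t efs ${var.EFS_ID}:/ /var/www/html
    # Fill in the real database connection details in wp-config.php.
    sed -i "s/localhost/${var.DB_HOST}/" /var/www/html/wp-config.php
    sed -i "s/database_name_here/${var.DB_NAME}/" /var/www/html/wp-config.php
    # Let the apache user own the WordPress files.
    chown -R apache:apache /var/www/html
  EOF
}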

We define the VPC and nine subnets in the network module: three public subnets, three application subnets, and three database subnets. The load balancer and the EC2 instances will use the public subnets, while the EFS will sit in the private application subnets and the database in the private database subnets.
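A sketch of how one tier of subnets can be spread across three availability zones (the variable names and the count-based approach are illustrative):

data "aws_availability_zones" "available" {}

resource "aws_vpc" "main" {
  cidr_block = var.VPC_CIDR
}

# One public subnet per availability zone; the application and database
# tiers follow the same pattern with their own CIDR ranges.
resource "aws_subnet" "public" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.PUBLIC_SUBNET_CIDRS[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]
}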

We defined the security groups in the security module. Here you can note that the security groups for the Application Load Balancer and the WordPress EC2 instances receive connections only via ports 80 and 443 (although we don’t have an SSL certificate installed, so HTTPS will not work yet).

The database security group will only receive traffic from the WordPress security group, via port 3306.
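In outline, the two rules look like this (the resource names and inline-rule style are illustrative; the VPC ID is passed in from the network module):

# WordPress / load balancer security group: HTTP and HTTPS from anywhere.
resource "aws_security_group" "allow_wordpress" {
  vpc_id = var.VPC_ID

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Database security group: only the WordPress instances may reach port 3306.
resource "aws_security_group" "allow_database" {
  vpc_id = var.VPC_ID

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.allow_wordpress.id]
  }
}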

We also define an instance profile and a role used by the EC2 instances.

We attached these policies to the role:

[
  "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy",
  "arn:aws:iam::aws:policy/AmazonSSMFullAccess",
  "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM",
  "arn:aws:iam::aws:policy/AmazonElasticFileSystemClientFullAccess"
]

The storage, load balancer, and database modules are pretty simple. You can look into each module’s outputs.tf to see what values we “export” after deploying the infrastructure.
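For instance, the database module can expose the RDS endpoint that the compute module’s bootstrap script needs (the names here are illustrative):

# modules/database/outputs.tf
output "DB_ENDPOINT" {
  description = "Connection endpoint of the RDS instance"
  value       = aws_db_instance.wordpress.endpoint
}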

The last article of this series will explain how we deploy this setup and how we can further improve it.
