How to Build a Secure Cross-Account Continuous Delivery Pipeline for an AWS SAM Application

When deploying applications in a cloud environment, it is considered good practice to use multiple accounts to separate the different stages. There are several reasons for adopting this approach:

  • Resource isolation: each AWS account acts as a separate, isolated environment. This isolation helps prevent one application or team from accidentally interfering with the resources of another. It’s especially useful when multiple teams or applications share the same AWS services.
  • Security isolation: if one account is compromised, the attack surface for the rest of the accounts is minimized, reducing the risk of unauthorized access to critical resources.
  • Access control: Identity and Access Management (IAM) can be configured differently for each account, enabling precise control over who can access resources and services within each environment.
  • Easier cost management and better disaster recovery plans: with one account per stage, costs are easy to attribute, and environments can be recovered independently.

So, here at SG12, we follow this recipe: for each application we develop, we do this cross-account deployment. Today’s blog post explains how we do that for serverless applications built with the AWS Serverless Application Model (SAM).

First, we use AWS Organizations to create a new Organizational Unit (OU) under the main account. This approach is beneficial if you have multiple applications hosted on AWS: you want to avoid ending up with one big account holding 100 Lambda functions and ten databases. So, separate each application using AWS Organizations.
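If you manage the organization itself as code, the OU can also be declared with CloudFormation. Below is a minimal sketch using the AWS::Organizations::OrganizationalUnit resource; the OU name and root ID are placeholders you would replace with your own:

Resources:
  ApplicationOU:
    Type: AWS::Organizations::OrganizationalUnit
    Properties:
      Name: my-sam-application   # hypothetical OU name
      ParentId: r-examp          # placeholder: the root ID of your organization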

Another good practice is to implement a service control policy (SCP) per application and attach it to the organizational unit. If you are unfamiliar with SCPs, you can read more about them here: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html .

The idea is to have a first level of account security in place: do not let the accounts under that OU perform unnecessary actions. If you are building a serverless application, you don’t need to spawn EC2 instances – so do not allow that.
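For illustration, here is a minimal sketch of such an SCP declared via the AWS::Organizations::Policy resource, denying all EC2 actions for the accounts under the OU (the policy name and target ID are placeholders):

Resources:
  DenyEc2Scp:
    Type: AWS::Organizations::Policy
    Properties:
      Name: deny-ec2-for-serverless   # hypothetical policy name
      Type: SERVICE_CONTROL_POLICY
      TargetIds:
        - ou-examp-12345678           # placeholder: your OU ID
      Content:
        Version: '2012-10-17'
        Statement:
          - Effect: Deny
            Action: 'ec2:*'
            Resource: '*'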

The next step is to create the accounts used in our deployment. We split them into the Control Plane and the Data Plane.

In the Control Plane, we have a so-called tooling account, while in the Data Plane, we will have at least one pre-production/testing account and a production one. You can add a new account for each stage of your pipeline.

Now, we’d like to automate the setup of both the Data Plane and Control Plane resources because they are essential for building and deploying the application.

A basic workflow for our setup looks like this.

Implementing such an architecture starts with defining who can do what, and describing that with code.

The sample code for this article can be found here: https://github.com/crerem/Secure-Cross-Account-Continuous-Delivery-Pipeline.

We have the IAM roles – we will deploy those into the tooling account and into the testing and production accounts. We configure them to establish trust between the target accounts (the Data Plane accounts) and the trusting account (the Control Plane).
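In CloudFormation terms, that trust is expressed through the AssumeRolePolicyDocument of the roles deployed into the Data Plane accounts. A minimal sketch, assuming the tooling account ID arrives as a template parameter (the role and parameter names are illustrative):

Parameters:
  ToolingAccount:
    Type: String   # 12-digit account ID of the tooling (Control Plane) account
Resources:
  CrossAccountRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${ToolingAccount}:root   # trust the tooling account
            Action: sts:AssumeRole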

Once this is done, we will need to set up the pipeline orchestration. We use AWS CodePipeline to orchestrate the stages and AWS CodeBuild to build the SAM template into a deployable CloudFormation template. And, of course, we will need to feed our pipeline with code.

The SAM application will have a code repository – in this case, we use GitHub. The development team pushes code into that repository, and when this happens, our AWS CodePipeline picks it up, builds it with AWS CodeBuild, and deploys it to the testing and production accounts.
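For reference, the source stage of such a pipeline is typically a GitHub (version 1) source action authenticated with a GitHub token; a minimal sketch with hypothetical parameter names:

        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              Configuration:
                Owner: !Ref GitHubUser       # hypothetical parameters for the
                Repo: !Ref GitHubRepo        # repository owner, name, and branch
                Branch: !Ref GitHubBranch
                OAuthToken: !Ref GitHubToken
              OutputArtifacts:
                - Name: SourceOutput
              RunOrder: 1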

To deploy this environment, we created a deploy.sh script that can be found in the project repository. When you are ready, execute the script via the terminal. But before doing that, there are a few things you need to do:

  • Create the GitHub repository where the code for the SAM application will be stored, and get a GitHub connection token.
  • Since this article is about a pipeline for a SAM app, you will need a SAM application (we used Python) connected to the repository you just created. We will not cover building the app itself, but a sample SAM serverless app that uses Lambda and an API is enough for testing purposes.
  • Create the tooling, testing, and production accounts, and create the AWS CLI profiles on the machine from which you will deploy the pipeline.
  • Edit deploy.sh and add your GitHub and account details.

Note: we intentionally use broad permissions (e.g., cloudformation:*) in this demo code. We recommend not using the code in this form in production; grant only the strictly required access.

Our solution consists of three CloudFormation templates that define the roles we need, create a KMS encryption key and an S3 bucket, and set up the pipeline that deploys the code.

If you look at the deploy script, you may notice that we deploy the step-one-control-plane.yaml and step-three-pipeline.yaml CloudFormation stacks twice.

We did that because there is a circular dependency between the roles and the pipeline: the pipeline needs the ARNs of the roles from step-two-cross-accounts-roles.yaml, which in turn needs the roles and policies defined in step-three-pipeline.yaml and step-one-control-plane.yaml.

The step-one-control-plane.yaml

The first CloudFormation template we will deploy is step-one-control-plane.yaml. In it, we declare the KMS key, a key alias, and the S3 bucket where we will store the code artifacts.

A special note on the S3 bucket: it must be encrypted because we use it in a cross-account environment. For encryption purposes, we created our own KMS key, because the policy of the default key cannot be changed.

We need to change the policy because we must give the target accounts access to the key so they can encrypt/decrypt objects in the artifact bucket.
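A minimal sketch of the relevant key policy statements, assuming the target account IDs are passed in as parameters (statement IDs and parameter names are illustrative):

  ArtifactKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowAdminInToolingAccount
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root    # the tooling account itself
            Action: 'kms:*'
            Resource: '*'
          - Sid: AllowTargetAccountsToUseTheKey
            Effect: Allow
            Principal:
              AWS:
                - !Sub arn:aws:iam::${TestAccount}:root        # parameter: test account ID
                - !Sub arn:aws:iam::${ProductionAccount}:root  # parameter: production account ID
            Action:
              - kms:Encrypt
              - kms:Decrypt
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - kms:DescribeKey
            Resource: '*'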

The step-two-cross-accounts-roles.yaml

The step-two-cross-accounts-roles.yaml is the template that creates the roles for the testing and production accounts.

The ToolingAccountPipelineCloudformationRole will be the one that creates the CloudFormation stacks. Its permissions are too open, so you may want to adjust that.

For example, instead of:

- cloudformation:*

you may want to use:

- cloudformation:CreateStack
- cloudformation:DescribeStack*
- cloudformation:GetStackPolicy
- cloudformation:GetTemplate*
- cloudformation:SetStackPolicy
- cloudformation:UpdateStack
- cloudformation:ValidateTemplate

The CloudformationDeployerRole is the one that will deploy our application resources and can be assumed by the CloudFormation service.
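Its trust policy simply allows the CloudFormation service principal to assume the role; a sketch:

      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: cloudformation.amazonaws.com   # CloudFormation assumes this role
            Action: sts:AssumeRole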

You may need to adjust the permissions for this role. The sample code allows all actions on Lambda and API Gateway, and you should limit those to the required permissions only. You may also need to add permissions for other services you use – like Step Functions. This sample code assumes a SAM app that only uses Lambda and an API.

  Action:
  - lambda:*
  - iam:*
  - cloudformation:*
  - apigateway:*

Note: we also used iam:* because CloudFormation needs to create IAM resources on our behalf, which is part of creating the API Gateway and Lambda resources.
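As a rough sketch of what a tighter action list for this Lambda-plus-API-Gateway demo could look like (the exact actions are illustrative and would need to match what your template actually creates):

  Action:
  - lambda:CreateFunction
  - lambda:GetFunction
  - lambda:DeleteFunction
  - lambda:UpdateFunctionCode
  - lambda:UpdateFunctionConfiguration
  - lambda:AddPermission
  - lambda:RemovePermission
  - apigateway:GET
  - apigateway:POST
  - apigateway:PUT
  - apigateway:PATCH
  - apigateway:DELETE
  - iam:CreateRole
  - iam:GetRole
  - iam:DeleteRole
  - iam:PassRole
  - iam:PutRolePolicy
  - iam:DeleteRolePolicy
  - iam:AttachRolePolicy
  - iam:DetachRolePolicy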

The step-three-pipeline.yaml

The last CloudFormation template we deploy is the actual pipeline.

The code is pretty straightforward – we create an AWS::CodePipeline::Pipeline with its role & policy and an AWS::CodeBuild::Project, again with a role and policy.

The pipeline will have several stages: the first connects to GitHub and pulls the new code, while the second builds the code with the help of CodeBuild. You could write the build instructions directly in the template or use a buildspec.yaml file.

We chose the second method, and you will find the buildspec.yaml file we use in the GitHub repository. You must copy it into the root of your SAM application; otherwise, the build stage will fail.

The build commands are the following:

- echo "Starting SAM packaging `date` in `pwd`"
- pip install --upgrade pip
- pip install pipenv --user
- pip install awscli aws-sam-cli
- pip install -r requirements.txt
- sam build
- sam package --template-file .aws-sam/build/template.yaml --s3-bucket $ArtifactBucket --output-template-file packaged-template.yml --region $AWS_REGION
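For context, a minimal buildspec.yaml skeleton wrapping these commands could look like the following sketch (the actual file in the repository may differ slightly):

version: 0.2
phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install awscli aws-sam-cli
  build:
    commands:
      - pip install -r requirements.txt
      - sam build
      - sam package --template-file .aws-sam/build/template.yaml --s3-bucket $ArtifactBucket --output-template-file packaged-template.yml --region $AWS_REGION
artifacts:
  files:
    - packaged-template.yml   # later referenced as BuildOutput::packaged-template.yml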

Since our application is written in Python, we upgrade pip, install the modules from requirements.txt, and then use the “sam build” command.

Finally, we use “sam package” to upload the new artifacts to our S3 bucket.

Special note: the built artifacts will be in the .aws-sam/build folder, so if you adapt this code to non-SAM applications, you may need to change that.

The following stages create the change sets and deploy them to the testing and production accounts. In between, there is a manual approval stage.
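The approval stage itself is a small stanza; a minimal sketch (the stage and action names are illustrative):

        - Name: ApproveDeployToProduction
          Actions:
            - Name: ManualApproval
              ActionTypeId:
                Category: Approval
                Owner: AWS
                Version: 1
                Provider: Manual
              RunOrder: 1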

Looking over those deployment stages, you may ask: why is there one role under “Configuration” and another at the “Action” level?

        - Name: DeployToTest
          Actions:
            - Name: CreateChangeSetTest
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CloudFormation
              Configuration:
                ChangeSetName: !Sub ${ProjectName}-changeset-test
                ActionMode: CHANGE_SET_REPLACE
                StackName: !Sub ${ProjectName}-stack-test
                Capabilities: CAPABILITY_NAMED_IAM
                TemplatePath: BuildOutput::packaged-template.yml
                # Passed to the CloudFormation service in the target account
                # to create/update the stack resources
                RoleArn:
                  Fn::If:
                    - AddCodeBuildResource
                    - !Sub arn:aws:iam::${TestAccount}:role/${ProjectName}-CloudformationDeployerRole
                    - !Ref AWS::NoValue
              InputArtifacts:
                - Name: BuildOutput
              RunOrder: 1
              # Used by CodePipeline itself to perform this action
              RoleArn:
                Fn::If:
                  - AddCodeBuildResource
                  - !Sub arn:aws:iam::${TestAccount}:role/${ProjectName}-ToolingAccountPipelineCloudformationRole
                  - !Ref AWS::NoValue

The CloudformationDeployerRole

The role specified in the Configuration section (under Configuration -> RoleArn) is typically used by the AWS CloudFormation service when it interacts with your AWS resources during the stack creation/update process.

This role is responsible for the CloudFormation deployment actions, such as creating or updating a CloudFormation stack, and it often requires permissions to perform actions like creating IAM roles, security groups, Lambda functions, etc.

It also needs permissions to implement the infrastructure changes defined in your CloudFormation stack, and it is passed to CloudFormation to execute the deployment on our behalf in the target account.

The ToolingAccountPipelineCloudformationRole

The role specified in the Action section is used by AWS CodePipeline when it performs the specific Action defined in your pipeline.

This role often has more specific and limited permissions than the CloudFormation role. It’s used to execute the CodePipeline action, such as invoking AWS Lambda functions, running AWS CodeBuild projects, or interacting with other AWS services as part of your pipeline.

Separating these roles allows for a more granular and secure permission model. You can limit the permissions of the CodePipeline action role to only what it needs to do, reducing the risk of unintended actions. Meanwhile, the CloudformationDeployerRole role has the necessary permissions to create and manage AWS resources as defined in your CloudFormation template.

After we run the deployment script, it will take a few minutes until all resources are deployed. During that time, you can watch the CloudFormation console in the three accounts and see how the resources are created.

In the end, our pipeline looks like this. 

This is how a simplified CI/CD pipeline for a SAM application looks. You can notice the thin line between the continuous integration and continuous delivery phases: the source and build stages were the continuous integration part, and the deployment to all the environments in this demo was the delivery phase.

The bigger picture 

No matter how big or small an organization is, you want AWS accounts configured with a set of guardrails by the Security/Management team.

You also want to automate the creation of the resources that constitute the governance layer, because machines are better than humans at this kind of repetitive task.

For a simple organization, you have an account structure that looks like this:

  • A parent account/tooling account
  • One or more business unit accounts

You start with the Control Plane, create service control policies, and automate the build, test, and deployment phases. Treat these resources as an application. Ideally, we should also deploy more than our application code: besides roles and policies, we can add security controls like AWS Config, Amazon GuardDuty, etc.

For example, you can have a company policy that says all S3 buckets should have versioning enabled. Via the pipeline, you deploy the AWS Config rules that test this, and notify an acceptance/security team when that code reaches the sandbox account.
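That versioning check maps to the S3_BUCKET_VERSIONING_ENABLED managed rule; a minimal sketch of the corresponding resource (the rule name is illustrative):

  S3VersioningRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-versioning-enabled   # hypothetical name
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_VERSIONING_ENABLED
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket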

The acceptance/security team will review, accept, or deny the deployment. The idea is to treat the security controls as part of an application, not something separate. 

The sample code for this article can be found here: https://github.com/crerem/Secure-Cross-Account-Continuous-Delivery-Pipeline.
