Serverless architecture is an approach that lets you build applications without provisioning or managing servers. And usually, the first thing that comes to a developer’s mind when thinking about serverless is Lambda functions.
Lambda is a Function as a Service (FaaS) offering – meaning you provide specialized short-running, focused code to Lambda, and the service will run it and bill you only for what you consume.
Lambda functions appeal for several reasons, including their ability to auto-scale and their pay-per-use pricing. But to take complete advantage of these features and fully benefit from such an architecture, we need our other infrastructure components to be just as flexible.
Serverless architecture encompasses much more than AWS Lambda. This article will cover the architecture and services used in a typical serverless application. This post is just a bird’s-eye view, and we will not discuss each service in detail.
Our goal is to provide a stable, fully controlled system with a clear and comfortable developer experience, so we will go over each feature/service in the following paragraphs.
But first, let’s talk a little about microservices.
Microservices as Architecture Pattern
Serverless and Microservices are two different concepts: microservice is a way to design an application, while serverless is a way to run one. We all know that serverless is a model where code is executed on demand when specific actions are triggered.
On the other hand, microservices are the architectural pattern in which applications are split into a series of small services.
While they are different concepts, they work very well together. For example, getting rid of servers is a great way to eliminate operational complexity, while splitting the application into small decoupled services can be a great way to plan your application.
If you decide to use this kind of architecture, you will get faster deployments and better scalability, and teams can develop and deploy microservices independently. Also, since these services are isolated, you will have better fault tolerance.
However, you need to be aware of some drawbacks to this approach: communication between services is complex, global testing and debugging become more complicated, and more microservices can sometimes mean more resources.
That being said, here is a diagram of a typical serverless application.
Asynchronous Service Communication
The first service we will cover is EventBridge, a serverless event bus. It simplifies the process of building event-driven architectures while making it possible to connect various external sources or AWS services.
Events can be triggered by internal lambda functions, external applications, or other AWS services, and they can also call AWS services or Lambda functions. So think of it as a central communication hub of our application.
When to use EventBridge:
- When building applications that react to events from other AWS services or external applications
- When designing an application with decoupled services
- When you want to schedule specific actions – think of cron jobs on a classic server
- When integrating your application with other SaaS providers
- When publishing messages to many subscribers, using the event data to match targets
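As a sketch of how EventBridge routes events, the simplified matcher below checks an event against a pattern. Real EventBridge patterns support many more operators (prefix, numeric, anything-but, etc.), and the source and detail-type names here are hypothetical.

```python
# A hypothetical event, shaped like what a producer would put on the bus.
event = {
    "source": "myapp.orders",               # assumed custom source name
    "detail-type": "order.created",
    "detail": {"orderId": "o-123", "total": 42.5},
}

# An EventBridge-style pattern: each field lists the values it accepts.
pattern = {
    "source": ["myapp.orders"],
    "detail-type": ["order.created", "order.updated"],
}

def matches(event, pattern):
    """Simplified EventBridge matching: every pattern field must be
    present in the event, and its value must be one of the listed ones."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

print(matches(event, pattern))  # True: the event would be routed to the target
```

A rule on the bus pairs such a pattern with one or more targets (a Lambda function, an SQS queue, another bus), which is what makes EventBridge the central communication hub.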
Business API
Our front end must communicate with the back end or a database to get or push data. To do this, we implement an AWS API Gateway and design custom methods and resources for each action.
Each API method will call a corresponding Lambda function that will perform different actions: updating a database, fetching data, serving some image path, etc.
The whole architecture is event-driven, which implies a prompt response for every user action. Therefore, DynamoDB can asynchronously invoke Lambda functions (via DynamoDB Streams) to react to any change when needed. This downstream Lambda function can also send input to EventBridge, triggering additional events.
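A minimal sketch of a Lambda behind an API Gateway proxy integration: the gateway passes the HTTP request in as `event` and expects a `statusCode`/`headers`/`body` dict back. The route and payload here are hypothetical, and the database read is stubbed.

```python
import json

def handler(event, context=None):
    """Minimal Lambda proxy handler: API Gateway passes the HTTP request
    as `event` and expects a dict with statusCode/headers/body back."""
    if event.get("httpMethod") == "GET":
        body = {"items": ["a", "b"]}   # stand-in for a database read
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(body),
        }
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}

response = handler({"httpMethod": "GET", "path": "/items"})
print(response["statusCode"])  # 200
```

Each API method maps to one such handler, which keeps every function small and focused.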
When to use AWS API Gateway
- It’s simple: when you need to develop an API. Since 99% of today’s apps require an API, you must be comfortable with this service.
Asynchronous tasks
An API Gateway triggering a Lambda is a synchronous invocation: errors and retries are handled by the client – in our case, the API Gateway or the user interface.
Asynchronous invocation is typically used when an AWS service invokes Lambda functions on your behalf – like when you upload an image to S3 and S3 then calls a Lambda. S3 will not wait for those Lambda executions to finish.
If there is an error, Lambda will handle the retry logic and reprocessing (by default it retries up to two more times, configurable between 0 and 2). In this case, the code needs to be idempotent: no matter how many times you execute it, you achieve the same result.
Any events that will not be processed (after the automatic retries) will be sent to a Dead Letter Queue. Later on, you can use this queue to diagnose the process.
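A minimal sketch of idempotency for an asynchronous handler, assuming the payload carries a unique id. A real handler would record processed ids in DynamoDB rather than in memory; here a set stands in for that table.

```python
# In-memory stand-in for a DynamoDB "processed events" table.
processed = set()

def handler(event, context=None):
    """Idempotent async handler: Lambda may deliver the same event more
    than once (automatic retries), so we key the side effect on a unique
    event id and skip duplicates."""
    event_id = event["id"]          # assumed unique id in the payload
    if event_id in processed:
        return "skipped"            # duplicate delivery: do nothing
    processed.add(event_id)
    # ... perform the real side effect (send email, write record, ...) here
    return "processed"

print(handler({"id": "evt-1"}))  # processed
print(handler({"id": "evt-1"}))  # skipped (retry of the same event)
```

With this guard in place, the automatic retries and any reprocessing from the Dead Letter Queue are safe.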
In such applications, there can be numerous asynchronous tasks. The simplest is a Lambda triggering an SMS or email notification. A more complex one is a file-processing chain with multiple results (like encoding a video in more than one format). In that case, we could use an SNS topic to fan out to multiple SQS queues.
BackEnd to Front End
When using asynchronous operations, a user on the front end will still need data from the backend. For example, an asynchronous Lambda may finish a long-running task whose result the user is waiting for. In this case, you will want to update the end user by pushing data from the backend back to the front end.
The solution is to use API Gateway Websocket API. Then, your front end will open a communication line via WebSocket (a long-lasting, bi-directional communication channel).
This setup keeps the WebSocket connection alive and only invokes your Lambda when messages arrive.
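A sketch of the connection-tracking half of a WebSocket API, under the usual pattern: remember connection ids on `$connect`, forget them on `$disconnect`. A real app would store the ids in DynamoDB and push to them later via the connection management API; a set stands in here.

```python
# In-memory stand-in for a connections table (DynamoDB in a real app).
connections = set()

def ws_handler(event, context=None):
    """Handles API Gateway WebSocket routes: remember connection ids on
    $connect, forget them on $disconnect. A backend Lambda can later push
    data to every stored id."""
    ctx = event["requestContext"]
    route, conn_id = ctx["routeKey"], ctx["connectionId"]
    if route == "$connect":
        connections.add(conn_id)
    elif route == "$disconnect":
        connections.discard(conn_id)
    return {"statusCode": 200}

ws_handler({"requestContext": {"routeKey": "$connect", "connectionId": "c1"}})
print(connections)  # {'c1'}
```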
FrontEnd (development)
Every application needs a front end. And if you need a tool to develop that, you could use Amplify. It will make the frontend development much more straightforward, and AWS has already integrated it with other platform services you may need.
Amplify is a CLI tool, an IaC tool, an SDK, and a set of UI components, among other things. In addition, the frontend JS SDK will be used to speed up integration with resources (such as Cognito for authentication).
While AWS Amplify may be the weapon of choice for many developers, there are other tools you can use to develop a front end. In the end, you will use the languages and patterns you are comfortable with, or whatever the situation requires. But since this article is about AWS, we use AWS Amplify.
When to use Amplify:
- When creating full-stack applications or end-user interfaces
- When you want to connect quickly with AWS resources
- For static websites or single-page web apps
Hosting, Storage, and Distribution
Every application or website needs a place for hosting the code files, images, etc.
In classic architecture, you would host those assets in a folder on your server’s hard disk. In a serverless architecture, we use AWS S3 as storage and expose those files through CloudFront (the AWS content delivery network service).
AWS S3 is probably the most common AWS service, so I will only say a little about it. However, there are some things we need to underline regarding CloudFront.
Besides adding an extra layer of security (you can restrict access to your S3 files so it goes only via CloudFront), CloudFront will improve the latency of your application and protect you in case of DDoS attacks.
And there is another feature you can benefit from: Lambda@Edge, a CloudFront feature that lets you run code closer to your users’ location and can help you increase app performance.
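As an illustration, a Lambda@Edge viewer-request function receives the CloudFront request and can modify it or answer directly. The sketch below redirects a hypothetical legacy path before the request ever reaches the origin; the event shape follows CloudFront's `Records[0].cf.request` structure.

```python
def edge_handler(event, context=None):
    """Lambda@Edge viewer-request sketch: inspect the CloudFront request
    and, as an example, redirect a legacy path before it reaches S3."""
    request = event["Records"][0]["cf"]["request"]
    if request["uri"] == "/old-page":     # hypothetical legacy path
        return {
            "status": "301",
            "statusDescription": "Moved Permanently",
            "headers": {"location": [{"key": "Location", "value": "/new-page"}]},
        }
    return request   # pass the request through to the origin unchanged

sample = {"Records": [{"cf": {"request": {"uri": "/old-page"}}}]}
print(edge_handler(sample)["status"])  # 301
```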
When to use S3
- Anytime you need storage – but look over the various storage classes provided
When to use CloudFront
- When you want to distribute your files worldwide; a file closer to the user’s location means lower latency
- When you want DDoS protection (for full DDoS protection you need other services as well)
- To limit the exposure of your S3 buckets
- In some cases, to provide or restrict access to files for certain users
Domain and certificate
The domain names of your application will be handled by Route 53 – the AWS DNS service. And since all applications should use an SSL/TLS certificate, we can generate one in AWS Certificate Manager and attach it to CloudFront.
Serverless Databases
In classic architecture, you will have a database installed on a server (sometimes on the same server that is used by the web application).
This approach has many disadvantages: underuse of your server, high maintenance, and a lot of work when you need to scale.
On AWS, you can go serverless with both SQL-compatible (Amazon Aurora Serverless) and NoSQL (DynamoDB) databases. Using one of these solutions, you pay only for what you use, without the need to maintain and configure a dedicated server.
These databases are automatically scalable, so you will not need to worry about that.
Amazon ElastiCache
You can use Amazon ElastiCache, a fully managed, in-memory caching service, to speed things up. Using this caching layer, you will access data in microseconds, get a faster application, and make fewer database requests.
When to use Amazon ElastiCache
- When you want to boost performance and reduce latency (always!)
- To reduce the overhead of self-managed caching
- To reduce pressure on your backend database
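The usual way to put a cache in front of the database is the cache-aside pattern. In this sketch, plain dicts stand in for Redis/Memcached and for the database; the user id and record are hypothetical.

```python
cache = {}                                 # stand-in for ElastiCache (Redis/Memcached)
database = {"user-1": {"name": "Ada"}}     # stand-in for the backing database
db_reads = 0

def get_user(user_id):
    """Cache-aside: try the cache first, fall back to the database and
    populate the cache so the next read skips the database entirely."""
    global db_reads
    if user_id in cache:
        return cache[user_id]
    db_reads += 1
    user = database[user_id]
    cache[user_id] = user
    return user

get_user("user-1")
get_user("user-1")
print(db_reads)  # 1 — the second call was served from the cache
```

In production you would also set a TTL on cached entries so stale data eventually expires.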
Users and authentication
Almost all applications have a user registration and authentication feature. In classic architecture, the usernames & passwords are stored in a database, and registration/authentication is made via a database call.
While you can still do that, you have a new option when going serverless on AWS.
The service is called Amazon Cognito, and besides classic user management, it can help you with external identity provider integrations. So, instead of the username/password authentication method, you could let users sign in with their Google or Facebook accounts.
You can also use Cognito user pools and authorizers with API Gateway routes to manage access to certain features of your application.
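To show the shape of route-level access control, here is a sketch of a Lambda authorizer: it validates the caller's token and returns an IAM policy allowing or denying the invocation. The token check is a fake placeholder; a real authorizer would verify a Cognito JWT signature and claims.

```python
def authorizer(event, context=None):
    """Sketch of an API Gateway Lambda authorizer: validate the token
    (fake check here; real code would verify a Cognito JWT) and return
    an IAM policy allowing or denying the invocation."""
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "valid-token" else "Deny"   # placeholder validation
    return {
        "principalId": "user-123",                            # hypothetical user id
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }

result = authorizer({"authorizationToken": "valid-token"})
print(result["policyDocument"]["Statement"][0]["Effect"])  # Allow
```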
When to use AWS Cognito
- When you add user management to your app
- When you want to add sync functionality
File upload
A typical web/mobile application feature lets a user upload an image. While this is not a serverless service (it is more of a serverless technique), we should include it in this list.
In this case, we use a feature of S3 called presigned URLs. A presigned URL is an upload URL that the front end of our application can use to upload the image to the correct S3 location.
Additionally, you can set up an S3 event triggered when the new file is added and perform further operations.
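The follow-up Lambda receives an S3 event notification describing the new object. The sketch below extracts the bucket and key from that notification (bucket and key names are hypothetical); a real handler would then resize, scan, or index the file.

```python
def s3_event_handler(event, context=None):
    """Reacts to an S3 "object created" notification (e.g. after an
    upload via a presigned URL) by extracting each bucket/key pair."""
    keys = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        keys.append((bucket, key))
        # ... process the object here (resize image, virus scan, ...)
    return keys

sample = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                              "object": {"key": "avatars/user-1.png"}}}]}
print(s3_event_handler(sample))  # [('uploads', 'avatars/user-1.png')]
```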
Step Functions
There are situations when you need to create serverless workflows. And when you reach that moment, AWS Step Functions is the go-to service.
This service is based on state machines (a workflow) and tasks (a state in the workflow).
AWS Step Functions lets you coordinate other AWS services into serverless workflows that are faster to build and update.
A Lambda function calling another can become challenging as the application grows more sophisticated. Instead of using a chain of Lambda functions (a Lambda that calls another Lambda), you can design a workflow that describes a series of tasks. And if the flow changes, you update the workflow definition instead of the code.
Each task could call a specific lambda function (or other services), so you will get a more straightforward way to manage the process.
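A Step Functions workflow is written in the Amazon States Language (ASL). Below is a minimal two-task definition, expressed as a Python dict that serializes to the JSON you would deploy; the function ARNs and the video-processing scenario are hypothetical.

```python
import json

# A minimal Amazon States Language (ASL) definition: two Task states in
# sequence, each pointing at a hypothetical Lambda function ARN.
state_machine = {
    "Comment": "Process an uploaded video (hypothetical workflow)",
    "StartAt": "Transcode",
    "States": {
        "Transcode": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transcode",
            "Next": "Notify",
        },
        "Notify": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:notify",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

Adding a retry policy, a new step, or a branch is then an edit to this definition, not to the Lambda code.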
Security
Identity and Access Management (IAM) is a global service and is the place where you can control access to the rest of the AWS services. It is a sophisticated service and can be hard to use at first, but it will give you the tools to protect every layer of your application.
Using all kinds of policies, you can allow application entities only the access they require.
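As an example of granting only the required access, here is a least-privilege policy document as a Python dict that serializes to deployable JSON: it lets a function read from one specific DynamoDB table and nothing else. The table name and account id are hypothetical.

```python
import json

# Least-privilege policy: the function may only read items from one
# specific DynamoDB table (table name and account id are hypothetical).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

print(json.dumps(policy, indent=2))
```

Attached to a Lambda's execution role, this policy means a compromised or buggy function cannot write to the table or touch any other resource.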
When to Use IAM
This global service is used to set access and permissions for your entities. It applies to all architectures.
AWS System Manager Parameter Store and AWS Secrets Manager
One aspect of application security is how things like environment variables, API keys, databases passwords, etc., are stored and retrieved.
As a best practice, secret information should not be stored as plain text or embedded in your source code.
These two services have similar functionality, allowing you to centrally manage and secure your secret information.
To store your parameters or private information, you declare key-value pairs in either service. Then, after you define them, you can read them in your application via the AWS CLI or an SDK.
In this way, your secrets are not visible in the code, and you have a central place where you can manage/change/rotate without altering the code base.
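Because a busy Lambda may be invoked many times per second, it is common to read a parameter through a small in-memory cache instead of calling the API on every invocation. In this sketch, the `fetch` callable stands in for the real Parameter Store/Secrets Manager call, and the parameter name is hypothetical.

```python
import time

_cache = {}   # name -> (value, fetched_at)
TTL = 300     # re-fetch after 5 minutes

def get_parameter(name, fetch, now=time.time):
    """Reads a secret/parameter through an in-memory cache so a busy
    Lambda does not call Parameter Store on every invocation.
    `fetch` stands in for the real SSM/Secrets Manager API call."""
    entry = _cache.get(name)
    if entry and now() - entry[1] < TTL:
        return entry[0]
    value = fetch(name)
    _cache[name] = (value, now())
    return value

calls = []
fake_fetch = lambda name: calls.append(name) or "s3cret"   # fake API call
print(get_parameter("/app/db-password", fake_fetch))  # s3cret
get_parameter("/app/db-password", fake_fetch)
print(len(calls))  # 1 — the second read came from the cache
```

Rotating the secret then only requires updating it in the service; the cache picks up the new value after the TTL expires.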
When to Use
- Similar to AWS IAM, these are global services
- When you need to secure your application
- Maintenance and protection of your secrets
Monitoring
The central monitoring service is CloudWatch. It gives you system-wide visibility into your AWS resources and applications by monitoring built-in and custom metrics. These metrics provide insight into the operational health of your resources and the way they are used.
Besides monitoring, CloudWatch can help you send notifications, create alarms, or automatically make changes to the resources you are monitoring when a threshold is breached.
When to use CloudWatch
- Collect and track various metrics
- Collect and monitor log files
- Set alarms
- Trigger events as a reaction to changes in AWS resources
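One way a Lambda can publish a custom metric is the CloudWatch Embedded Metric Format (EMF): the function prints a structured JSON log line and CloudWatch extracts the metric from it, with no extra API call. The namespace and metric name below are hypothetical, and the timestamp is fixed for reproducibility.

```python
import json

def emit_metric(name, value, namespace="MyApp"):
    """Emits a custom CloudWatch metric via the Embedded Metric Format:
    CloudWatch turns this structured log line into a metric."""
    record = {
        "_aws": {
            "Timestamp": 1700000000000,   # milliseconds since the epoch
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [[]],
                "Metrics": [{"Name": name, "Unit": "Count"}],
            }],
        },
        name: value,
    }
    print(json.dumps(record))   # Lambda stdout goes to CloudWatch Logs
    return record

emit_metric("OrdersProcessed", 3)
```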