You can read the first part of this article here: https://sg12.cloud/aws-vs-azure-vs-google-cloud-platform-networking-concepts-and-services/
Network Peering
Network peering (or VPC peering) is a networking connection between two networks/VPCs that allows resources on either side to communicate with each other as if they were on the same network.
All three providers support peering between logical networks (VPC or VN); however, transitive peering is not allowed. So if network A is connected to network B and network B is connected to network C, network A will NOT be able to communicate with network C via network B. You will need a dedicated connection between networks A and C.
Network peering works only if you have a small number of networks, and it scales poorly: every network you introduce needs an extra connection to each existing network.
The solution is a hub and spoke architecture that allows multiple networks to be connected with a single connection.
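Both properties — no transit through an intermediate network, and the quadratic growth of a full mesh — can be sketched with a tiny model. The network names are made up, and the counts are the general formulas, not provider limits:

```python
def can_communicate(peerings: set, a: str, b: str) -> bool:
    # Peering is non-transitive: traffic flows only over a direct peering.
    return frozenset((a, b)) in peerings

# A-B and B-C are peered, but A cannot reach C through B.
peerings = {frozenset(("A", "B")), frozenset(("B", "C"))}
assert can_communicate(peerings, "A", "B")
assert not can_communicate(peerings, "A", "C")  # no transit through B

def full_mesh_links(n: int) -> int:
    # Dedicated peering between every pair of n networks.
    return n * (n - 1) // 2

def hub_spoke_links(n: int) -> int:
    # One connection per network to a central hub.
    return n

assert full_mesh_links(10) == 45   # 45 peerings for 10 networks
assert hub_spoke_links(10) == 10   # vs 10 connections via a hub
```

Ten networks already need 45 peerings in a full mesh; a hub needs only one connection per network.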
AWS
The AWS solution is called AWS Transit Gateway, a network device that acts as a central hub, connects VPCs inside a region, and allows traffic to flow between them. If multiple regions are involved, inter-region peering connects the transit gateways.
The Transit Gateway acts like a highly scalable router and can also help you connect on-premises networks to your VPCs.
In AWS Transit Gateway, each attachment (a VPC, a VPN, a Direct Connect gateway, or another Transit Gateway via peering) can be associated with one of the Transit Gateway’s route tables. This allows granular control over how traffic entering the Transit Gateway from different attachments is routed.
Inside the Transit Gateway, a route table supports both IPv4 and IPv6 routes and can direct traffic to either VPCs or VPN connections. When you attach a VPC or a VPN to a Transit Gateway, the attachment is associated with the default route table unless you specify otherwise.
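The association model can be illustrated with a rough sketch: each attachment consults the route table it is associated with, and the longest matching prefix wins, as in any router. All attachment names, table names, and CIDRs below are hypothetical:

```python
import ipaddress

# Hypothetical Transit Gateway route tables: CIDR -> next-hop attachment.
route_tables = {
    "rtb-prod": [("10.0.0.0/16", "attach-vpc-prod"),
                 ("0.0.0.0/0", "attach-vpn-onprem")],
    "rtb-shared": [("10.1.0.0/16", "attach-vpc-shared")],
}
# Each attachment is associated with exactly one route table.
associations = {
    "attach-vpc-prod": "rtb-prod",
    "attach-vpn-onprem": "rtb-prod",
    "attach-vpc-shared": "rtb-shared",
}

def next_hop(src_attachment: str, dst_ip: str):
    table = route_tables[associations[src_attachment]]
    ip = ipaddress.ip_address(dst_ip)
    best_len, target = -1, None
    for cidr, via in table:
        net = ipaddress.ip_network(cidr)
        if ip in net and net.prefixlen > best_len:  # longest-prefix match
            best_len, target = net.prefixlen, via
    return target  # None means no route: traffic is dropped

assert next_hop("attach-vpn-onprem", "10.0.4.7") == "attach-vpc-prod"
assert next_hop("attach-vpc-prod", "8.8.8.8") == "attach-vpn-onprem"
assert next_hop("attach-vpc-shared", "8.8.8.8") is None
```

Because the shared attachment uses a table without a default route, its traffic to the internet is simply dropped — which is exactly the kind of granular isolation per-attachment route tables enable.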
Azure
Azure supports two types of peering: virtual network peering, which connects VNets within the same Azure region, and global virtual network peering, which connects VNets across regions.
One VNet becomes the hub, and the other (spoke) VNets connect to it via peering. The hub VNet contains a gateway that routes traffic between the networks; it provides transitive connectivity between VNets as well as connectivity to on-premises networks.
In a hub-and-spoke network setup, gateway transit means that all the spoke networks can use one main VPN gateway located in the hub network. This way, you don’t have to put separate VPN gateways in each spoke network. The paths to reach networks connected to this gateway, or even networks at your physical location, are shared with all the connected networks through gateway transit. This makes it easier and more efficient to manage the network connections.
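What gateway transit changes for a spoke can be modeled as a difference in the spoke's effective routes. This is a minimal sketch, not Azure's actual route programming, and all CIDRs are illustrative:

```python
def spoke_routes(spoke_cidr: str, hub_cidr: str,
                 onprem_cidrs: list, use_remote_gateways: bool) -> list:
    """Routes a spoke VNet effectively has after peering with the hub.
    With gateway transit ('use remote gateways' on the spoke side),
    on-premises prefixes reachable via the hub's VPN gateway are also
    advertised to the spoke."""
    routes = [spoke_cidr, hub_cidr]   # local space + peered hub space
    if use_remote_gateways:
        routes += onprem_cidrs        # learned through the hub's gateway
    return routes

onprem = ["192.168.0.0/16"]
with_transit = spoke_routes("10.1.0.0/16", "10.0.0.0/16", onprem, True)
without_transit = spoke_routes("10.2.0.0/16", "10.0.0.0/16", onprem, False)

# Only the spoke using the hub's gateway can reach on-premises ranges.
assert "192.168.0.0/16" in with_transit
assert "192.168.0.0/16" not in without_transit
```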
GCP
Google Cloud’s VPC Network Peering allows two separate Virtual Private Cloud (VPC) networks to connect privately using their internal IP addresses. This works even if the VPCs live in different projects or different organizations. One of the main advantages of this service is how easy it is to set up: you don’t need any extra equipment.
Each VPC network automatically exchanges subnet routes with its peer, so both sides know how to reach each other’s internal IP ranges. You can also share custom routes, provided you configure the peering to export and import them.
GCP offers several hub-and-spoke solutions.
Shared VPC
Shared VPC in Google Cloud Platform (GCP) lets a central team create and manage network settings like subnets, route tables, and firewall rules and then share these with other projects.
This makes it easier to manage the network and apply the same policies and security rules to many projects. Since everything is in one network, setting up network pathways requires no extra work.
However, this setup can lead to security challenges. When projects with different security needs share the same large network, it can be difficult to keep them isolated. For example, a misconfiguration could expose a high-security project to lower-security areas, which is a considerable risk.
To reduce these risks, it’s a good idea to use separate Shared VPCs for different environments and security levels. This way, you benefit from centralized management while keeping various projects safely isolated.
Network Connectivity Center
The Network Connectivity Center (NCC) in Google Cloud is a service that makes it easier to handle complex networks across different locations and cloud platforms.
Think of it as a central hub connecting different network parts – these could be Google’s Virtual Private Clouds (VPCs), networks in your building, or networks from other cloud services. This Hub lets you manage all these connections in one place.
With the Network Connectivity Center hub, you can do a couple of things:
- Link up VPC networks within Google Cloud, whether in the same or different Google Cloud organizations.
- Connect networks outside Google Cloud to a Google Cloud VPC using special virtual machines called router appliance VMs.
- Control how VPCs in Google Cloud connect using these router appliance VMs.
- Use a Google Cloud VPC as a bridge to link external locations via VPN tunnels, special network connections called VLAN attachments, or router appliance VMs.
However, a single Network Connectivity Center hub can’t connect Google Cloud VPCs and external networks simultaneously. It’s either one or the other. This service helps manage complex network setups more efficiently by bringing everything under one roof.
On-premise/Hybrid connectivity
On-premise/hybrid connectivity bridges traditional on-premises data centers with cloud environments, enabling businesses to leverage the scalability and flexibility of the cloud while retaining critical data and applications on their premises. This approach supports various architectures, including VPNs for secure internet-based connections and direct connect services for private, high-speed links to the cloud.
You can connect your Virtual Private Cloud (VPC)/VN to other networks in several ways.
Site-to-Site VPN
This lets you create a secure connection between your VPC and a different network, such as your office network. AWS uses a virtual private gateway or a transit gateway to make this connection. The setup includes two VPN endpoints so there is a backup if one fails. You’ll need to configure a device (the customer gateway) in the other network for the connection to work.
Point-to-Site VPN
This service allows you and your team to securely access AWS resources or your on-premises networks from individual devices. You set up an access point (endpoint) that users connect to in a VPN session. The connection is encrypted and works from anywhere with an OpenVPN-based VPN client.
Network-to-Network
Network-to-network connections link one virtual network to another, similar to how you would connect a virtual network to your office network. Both connections use a special kind of gateway to create a secure tunnel, which uses technologies called IPsec and IKE to keep the data safe.
You can also mix network-to-network connections with connections to multiple sites.
All these network topologies are common to these cloud providers and can be achieved using specialized devices:
AWS
AWS uses Customer Gateways to represent your on-premises VPN device. For cloud connectivity, you can use either a Virtual Private Gateway (for Site-to-Site VPN connections) or a Transit Gateway (for connecting multiple VPCs and on-premises networks).
Azure
Azure requires an on-premises VPN device compatible with an Azure VPN Gateway to create Site-to-Site VPN connections. Within Azure, you create a VPN Gateway in your Virtual Network (VNet) for secure connectivity. The setup process involves configuring the on-premises VPN device and the Azure VPN Gateway to establish the VPN tunnel.
GCP
On the cloud side, you have an HA VPN gateway, a Google-managed VPN gateway running on Google Cloud. On the on-premises side, you have an external VPN gateway, the resource that represents your physical VPN device.
Please note that the GCP “External VPN Gateway” concept is similar to Azure’s “VPN Device” and AWS’s “Customer Gateway” in their respective VPN solutions.
You can establish the connections in two ways: via a VPN with IPsec or via a direct line. The VPN is cheap and quick to implement but limited in bandwidth and dependent on internet quality, while the private line is the expensive option but comes with high throughput, stronger security, and stability.
Each provider has a direct line service.
AWS
AWS Direct Connect is a service for fast, dedicated connections. It lets an organization connect its network directly to AWS with 1 to 10 Gbps speeds. This connection is made to one of AWS’s Direct Connect locations worldwide.
The setup of these connections is done through Virtual Interfaces. These can link to a Virtual Private Cloud (VPC) in AWS or a public AWS service. Each Direct Connect connection can have up to 50 Virtual Interfaces, and you can have up to 10 connections in each AWS region. If needed, you can ask to increase these limits.
Azure
Azure ExpressRoute is a service that establishes a private, dedicated network link between your on-premises infrastructure and Microsoft Azure. This connection sidesteps the public internet, resulting in enhanced reliability, increased speeds, reduced latency, and improved security compared to standard internet connections. It is particularly well-suited for high-volume data transfer or for consistent access to cloud services.
GCP
Google Cloud’s Cloud Interconnect service provides a high-speed, secure connection between an organization’s on-premises network and its Google Cloud environment. It offers two main options: Dedicated Interconnect for a private, physical connection and Partner Interconnect for connections via a service provider.
Dedicated Interconnect delivers high-bandwidth connectivity, ideal for large data transfers, while Partner Interconnect is more flexible and allows for varying connection capacities.
VPC / VN Endpoints
Virtual Private Cloud (VPC) or Virtual Network (VN) endpoints enable private connections between your VPC/VN and supported cloud services, bypassing the public internet for enhanced security and reliability.
These endpoints facilitate direct, secure access to cloud services such as storage, databases, and analytics from within your VPC/VN without requiring an Internet Gateway, VPN, or NAT device. This not only simplifies the network architecture but also reduces the exposure to internet-based threats. Endpoints are particularly valuable for compliance-sensitive applications requiring strict data sovereignty and privacy controls.
AWS
A VPC endpoint lets customers connect safely to AWS services and other endpoints that use AWS PrivateLink without needing public IP addresses. When an Amazon VPC talks to a service, the data stays within Amazon’s network.
VPC endpoints are virtual devices in your Amazon VPC that are horizontally scaled, redundant, and highly available, letting your resources communicate with services without availability risks or bandwidth constraints. There are two kinds of VPC endpoints:
Interface endpoints
Interface endpoints let you connect to certain services through AWS PrivateLink. These services can be managed by AWS, services run by other AWS customers and partners in their VPCs (endpoint services), and services from partners in the AWS Marketplace. The person or company that offers the service is called the service provider. The person or company that makes the interface endpoint to use the service is called the service consumer.
An interface endpoint consists of one or more elastic network interfaces with a private IP address that serves as an entry point for traffic destined to the service.
Gateway endpoints
A gateway endpoint directs traffic from your Amazon VPC to Amazon DynamoDB or Amazon S3 using specific paths, known as prefix lists, in your VPC’s route table. Unlike interface endpoints, gateway endpoints don’t work with AWS PrivateLink.
Your VPC instances don’t need public IP addresses to reach these services: traffic destined for the service’s prefix list is routed to the gateway endpoint through the entries in your VPC’s route table.
This setup helps connect to AWS services without going through the public internet.
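The effect on the route table can be pictured as a longest-prefix-match lookup: prefixes from the service's prefix list point at the gateway endpoint, and everything else follows the default route. The CIDRs below are documentation ranges standing in for a real prefix list, and the target names are made up:

```python
import ipaddress

# Illustrative stand-in for a service prefix list; not real S3 ranges.
service_prefixes = ["198.51.100.0/24", "203.0.113.0/24"]

# VPC route table: prefix-list entries target the gateway endpoint,
# the default route still points at the internet gateway.
route_table = [(cidr, "vpce-gateway-s3") for cidr in service_prefixes]
route_table.append(("0.0.0.0/0", "igw-internet"))

def route_target(dst_ip: str) -> str:
    ip = ipaddress.ip_address(dst_ip)
    best_len, target = -1, None
    for cidr, via in route_table:
        net = ipaddress.ip_network(cidr)
        if ip in net and net.prefixlen > best_len:  # longest-prefix match
            best_len, target = net.prefixlen, via
    return target

# Traffic to the service's prefixes flows through the gateway endpoint...
assert route_target("198.51.100.10") == "vpce-gateway-s3"
# ...while everything else still uses the internet gateway.
assert route_target("192.0.2.1") == "igw-internet"
```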
Azure
Azure Virtual Network (VNet) service endpoints help safely connect your network directly to Azure services without going through the internet by using the Azure backbone network.
With these endpoints, you can ensure that only your network can access certain Azure services. It means you don’t need a public IP address on your network to talk to Azure services; you can use private IP addresses instead.
Turning on service endpoints in your network allows you to set up rules that let only your network access specific Azure services, making your setup safer. It prevents the public internet from accessing these services, allowing only your network to reach them.
GCP
In Google Cloud Platform (GCP), Private Google Access and VPC Service Controls are the closest equivalents to Azure service endpoints or AWS interface endpoints.
Private Google Access
Private Google Access allows VM instances in a VPC (Virtual Private Cloud) that do not have external internet access (no external IP addresses) to reach Google APIs and services. This is similar to how Azure Service Endpoints and AWS Interface Endpoints work, enabling private connections to services.
With Private Google Access, services within your VPC can communicate with Google services like Google Cloud Storage, BigQuery, etc., through Google’s internal network, eliminating the need to expose these services to the public internet.
VPC Service Controls
VPC Service Controls enhance the security perimeter for API-based services on the Google Cloud Platform. While it’s more about controlling and securing access to services rather than enabling private connectivity, it complements Private Google Access by ensuring that only authorized applications and resources within your defined security perimeter can access specific GCP resources.
Load Balancers
Load balancers spread incoming traffic over virtual machines, containers, FaaS functions, or IP addresses. They help a service absorb traffic and run health checks against the backend endpoints, sending traffic only to the healthy ones.
There are two main types of load balancers: those that work at Layer 7 (the application layer) and those at Layer 4 (the transport layer). The former routes traffic based on the content of the request and can make some “intelligent” decisions; the latter is used when extreme performance and low latency are required.
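The difference comes down to what information the routing decision can see. A toy sketch, with hypothetical backend names, hostnames, and rules:

```python
# Layer 7 decision: the balancer can inspect the HTTP host and path.
def l7_route(host: str, path: str) -> str:
    if path.startswith("/api/"):
        return "api-backend"
    if host == "images.example.com":   # hypothetical hostname rule
        return "image-backend"
    return "web-backend"

# Layer 4 decision: only IPs and ports are visible, never the payload.
def l4_route(dst_port: int) -> str:
    return {443: "https-pool", 5432: "db-pool"}.get(dst_port, "default-pool")

assert l7_route("example.com", "/api/v1/users") == "api-backend"
assert l7_route("images.example.com", "/logo.png") == "image-backend"
assert l4_route(443) == "https-pool"
assert l4_route(8080) == "default-pool"
```

The Layer 4 function does strictly less work per packet, which is why that class of load balancer achieves the higher throughput and lower latency.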
AWS
Application Load Balancer
The Layer 7 load balancer from AWS is called the Application Load Balancer. It works at the application layer, evaluates the rules you set for incoming requests, and decides where to send them based on those rules.
You can set it up so different requests go to different places based on what’s being asked for – traffic can be distributed to virtual machines (EC2), IP addresses, or Lambda functions.
Network Load Balancer
The Layer 4 load balancer is called the Network Load Balancer. It can handle millions of requests per second, making routing decisions based on IP address and port number without inspecting the packets’ contents.
When a Network Load Balancer (NLB) gets a request to connect, it picks a target (like a server) based on its settings and tries to start a conversation with that target using the TCP protocol, following the rules you set up.
If you enable an Availability Zone for the NLB, it creates a load balancer node in that zone, and the node sends traffic only to targets in the same zone. If you enable cross-zone load balancing, however, a node can send traffic to targets in any enabled zone.
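The zone behavior can be modeled as a filter over the healthy targets. Instance IDs and zone names here are made up:

```python
def eligible_targets(node_zone: str, targets: list, cross_zone: bool) -> list:
    """Targets a load balancer node may route to: always only healthy
    ones, and only same-zone ones unless cross-zone balancing is on."""
    healthy = [t for t in targets if t["healthy"]]
    if cross_zone:
        return healthy
    return [t for t in healthy if t["zone"] == node_zone]

targets = [
    {"id": "i-1", "zone": "us-east-1a", "healthy": True},
    {"id": "i-2", "zone": "us-east-1b", "healthy": True},
    {"id": "i-3", "zone": "us-east-1a", "healthy": False},  # fails checks
]

same_zone = eligible_targets("us-east-1a", targets, cross_zone=False)
any_zone = eligible_targets("us-east-1a", targets, cross_zone=True)

assert [t["id"] for t in same_zone] == ["i-1"]          # zone-local only
assert [t["id"] for t in any_zone] == ["i-1", "i-2"]    # all healthy zones
```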
Gateway Load Balancer
The Gateway Load Balancer operates at the network layer (Layer 3) and is designed to simplify the deployment, scaling, and management of third-party virtual appliances such as firewalls and intrusion detection and prevention systems.
By operating at the IP packet level, the GLB can direct traffic to these appliances for inspection and filtering without being constrained by application protocols. A consistent flow hash (based on a five-tuple or three-tuple hash) is used to ensure that flows are sent to the same appliance for the duration of the session.
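A simple way to picture the flow hash: hash the 5-tuple (source IP, source port, destination IP, destination port, protocol) and take it modulo the number of appliances, so every packet of a flow lands on the same one. This sketch uses SHA-256 purely for illustration; it is not the provider's actual hash function:

```python
import hashlib

def pick_appliance(flow: tuple, appliances: list) -> str:
    """Map a 5-tuple deterministically onto one appliance so the whole
    session is inspected by the same device."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(appliances)
    return appliances[index]

appliances = ["fw-a", "fw-b", "fw-c"]   # hypothetical firewall fleet
flow = ("10.0.1.5", 40123, "203.0.113.9", 443, "tcp")

# Deterministic: every packet of this flow reaches the same appliance.
assert pick_appliance(flow, appliances) == pick_appliance(flow, appliances)
assert pick_appliance(flow, appliances) in appliances
```

A three-tuple variant would simply hash (source IP, destination IP, protocol), pinning all traffic between two hosts to one appliance regardless of port.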
Azure
Azure Load Balancer
The Azure Load Balancer is Microsoft’s Layer 4 load balancing solution. It can be public or internal and spreads incoming traffic across destinations such as virtual machines based on the rules you configure. It also performs health checks to ensure the endpoints are working correctly.
There are three types of Azure Load Balancers: Basic, Standard, and Gateway. Each type is made for different needs and varies in how much it can handle, what features it offers, and how much it costs.
Standard Load Balancer
Standard Load Balancers are recommended when ultra-low latency and high performance are required. They direct traffic within a single region and across regions, ensuring reliable connectivity through redundant, zone-aware deployments. They work with any virtual machine or virtual machine scale set in a single virtual network and support TCP and UDP traffic, with health probes over TCP, HTTP, or HTTPS.
Basic Load Balancer
This solution will be retired in 2025, but for the moment, it can be used for small-scale applications that do not require high availability or redundancy.
They work via TCP and UDP, are NIC-based, and can distribute traffic to virtual machines or scale sets inside one availability zone.
Gateway Load Balancer
The Gateway Load Balancer is designed for scenarios requiring fast performance and high reliability. It works with third-party network virtual appliances, such as firewalls or intrusion detection systems, to detect and block security threats.
Connecting the Gateway Load Balancer to your public endpoint requires only a single configuration step. This setup lets you integrate various security and monitoring functions, including firewall protection, advanced packet analytics, intrusion detection and prevention systems, traffic mirroring for analysis, DDoS protection, and any custom network tools you might need.
It employs a “bump-in-the-wire” technology, ensuring that all traffic heading to a public endpoint is routed through the necessary appliance for security or analysis before reaching your application.
Azure Application Gateway
The Azure Application Gateway is a web traffic management solution that ensures secure, scalable, and highly available web applications. Operating at the OSI model’s application layer (Layer 7), it offers advanced routing capabilities, enabling it to make intelligent decisions about where to send incoming web traffic based on the request’s content.
It can be particularly useful for managing complex multi-site or multi-region web application traffic. The Application Gateway supports autoscaling, SSL termination, and session affinity, which helps manage large volumes of traffic, secure communications, and maintain user session state across web servers.
It can work with other Azure services, such as Virtual Machines, Virtual Machine Scale Sets, and Azure Kubernetes Service, making it a flexible choice for various application scenarios.
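Session affinity can be pictured as a sticky mapping from an affinity cookie to a backend: the first request picks a server, and later requests carrying the same cookie return to it. This is a sketch of the general idea, not Application Gateway's actual cookie mechanism:

```python
import random

class AffinityBalancer:
    """Cookie-based session affinity sketch: the first request with a
    cookie picks a backend at random; later requests stick to it."""

    def __init__(self, backends: list):
        self.backends = backends
        self.sessions = {}          # affinity cookie -> chosen backend

    def route(self, cookie: str) -> str:
        if cookie not in self.sessions:
            self.sessions[cookie] = random.choice(self.backends)
        return self.sessions[cookie]

lb = AffinityBalancer(["web-1", "web-2", "web-3"])  # hypothetical servers
first = lb.route("session-abc")

# Subsequent requests with the same cookie reach the same server,
# preserving in-memory session state on that backend.
assert all(lb.route("session-abc") == first for _ in range(5))
assert first in ["web-1", "web-2", "web-3"]
```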
GCP
Application Load Balancer
An external Application Load Balancer is a proxy-based Layer 7 load balancer that distributes traffic to GCP services such as Compute Engine or Google Kubernetes Engine.
You can set up this load balancer in a few different ways:
- Global external Application Load Balancer: works worldwide and runs on Google’s infrastructure. It uses the open-source Envoy proxy to support advanced traffic management such as traffic mirroring, weight-based traffic splitting, and header transformations.
- Classic Application Load Balancer: global with Premium Tier networking, but can be restricted to a single region with Standard Tier.
- Regional external Application Load Balancer: works within a single region and uses Envoy. Like the global version, it offers advanced traffic-handling features.
Network Load Balancers
The Network Load Balancers are regional and work at layer 4 to help spread out incoming internet traffic across multiple servers (like instance groups or network endpoint groups) located in the same area as the load balancer.
These can be in different virtual networks but must be in the same location and project. They can handle traffic from anywhere on the internet, including from Google Cloud virtual machines with either their external IP addresses or access to the internet through Google Cloud’s NAT services.
You have two options: the Network Load Balancer (TCP/SSL) and the Network Load Balancer (UDP/multiple protocols). As the names suggest, the main difference is the set of protocols they support.
At this moment, there is no service similar to AWS or Azure Gateway load balancer. To mimic that functionality in Google Cloud, you would typically set up a network architecture that routes traffic through these third-party virtual appliances using Google Cloud’s load balancing and network management services.
This setup requires a more manual configuration approach. It combines Google Cloud services and marketplace solutions to create a traffic inspection and filtering pathway.
AWS, Azure, and GCP offer robust load-balancing solutions to distribute incoming network traffic across multiple servers or services, enhancing application availability, reliability, and scalability. These platforms provide Layer 4 for routing decisions based on IP address and port and Layer 7 for routing based on content type, URL, or other HTTP header information.
Each cloud provider supports auto-scaling, health checks, and SSL termination to ensure optimal performance and security. Despite platform-specific features and terminologies, the core functionalities of distributing traffic, ensuring application uptime, and providing secure connections are common across these services.