Google Cloud Hybrid Networking Patterns — Part 1
by Jasbir Singh, Staff Consulting Architect, Public Cloud, Rackspace Technology
Introduction
Enterprises rarely migrate their entire on-premises estate to the cloud in one go. It usually makes more sense to migrate workloads in phases, starting where the benefits of cloud migration are greatest. As a result, you still need a way for your on-premises systems to communicate with your newly deployed cloud resources.
There are multiple networking patterns that can be leveraged for designing hybrid connectivity on Google Cloud. The hybrid connectivity design depends on factors such as:
- The number of VPC networks used for the workloads
- The need for layer 7 traffic inspection using network virtual appliances (NVAs)
- Access to Google-managed services that use Private Services Access
This will be a multi-part blog series where I will cover different network topologies available on Google Cloud, catering to various hybrid connectivity use cases. This first blog will focus on hybrid connectivity to a single or shared VPC on Google Cloud. In subsequent blogs, I will discuss hybrid connectivity to multiple VPC (or shared VPC) networks on Google Cloud and hybrid connectivity using appliances.
When you have many workload VPC networks, you might need a hub-and-spoke architecture. This involves connecting multiple workload VPC networks to a transit hub VPC that has connectivity to on-premises environments and other clouds. I will cover hub-and-spoke with VPC Peering to spokes, hub-and-spoke with HA VPN to spokes, and hybrid connectivity using appliances with VPC Peering to spokes.
Hybrid connectivity to single VPC (or shared VPC)
1. Interconnect to On-Premises
A. Cloud Interconnect: Set up a Dedicated Interconnect or Partner Interconnect to Google Cloud. Connect to two Edge Availability Domains (EAD) in the same Metro to achieve a 99.99% SLA. You can connect your Cloud Interconnects to multiple regions within the Shared VPC.
B. VLAN Attachment: A VLAN attachment connects your interconnect in a Google point of presence (PoP) to a cloud router in a specified Google Cloud region.
C. Cloud Router: A Cloud Router exchanges dynamic (BGP) routes between your VPC networks and on-premises routers. You can configure dynamic routing between your on-premises routers and a Cloud Router in a particular region. Each Cloud Router is implemented by two software tasks that provide two interfaces for high availability. Configure BGP routing to each of the Cloud Router’s interfaces.
D. VPC Global Dynamic Routing: Configure global dynamic routing in the Shared VPC to allow the exchange of dynamic routes between all regions.
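As a rough sketch, steps B through D above map to gcloud commands like the following. The network name, region, ASN, and interconnect resource are placeholders; repeat the attachment in a second Edge Availability Domain for the 99.99% SLA.

```shell
# Step D: enable global dynamic routing on the Shared VPC
gcloud compute networks update shared-vpc --bgp-routing-mode=global

# Step C: Cloud Router that speaks BGP with the on-premises routers
gcloud compute routers create onprem-router \
    --network=shared-vpc --region=us-central1 --asn=65001

# Step B: VLAN attachment tying a Dedicated Interconnect to the Cloud Router
# (create a second attachment in the other EAD for 99.99% availability)
gcloud compute interconnects attachments dedicated create vlan-attach-1 \
    --interconnect=my-interconnect --router=onprem-router --region=us-central1
```

The attachment allocates BGP peering addresses that you then configure on your on-premises routers.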
2. HA-VPN to On-Premises
A. Cloud HA-VPN: The Cloud HA-VPN gateway establishes IPsec tunnels to the on-premises VPN gateway over the internet. HA-VPN offers a 99.99% SLA. You can have multiple HA-VPN tunnels into different regions within the Shared VPC.
B. Cloud Routers: Configure dynamic routing between the on-premises routers and a Cloud Router in each region. Each Cloud Router is implemented by two software tasks that provide two interfaces for high availability. Configure BGP routing to each of the Cloud Router’s interfaces.
C. VPC global dynamic routing: Configure global dynamic routing in the Shared VPC to allow the exchange of dynamic routes between all regions.
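The HA VPN setup above can be sketched with gcloud as follows. Names, region, ASN, peer IP addresses, and the shared secret are placeholders; a second tunnel on interface 1 (not shown) is required for the 99.99% SLA.

```shell
# HA VPN gateway in the Shared VPC
gcloud compute vpn-gateways create ha-vpn-gw-1 \
    --network=shared-vpc --region=us-central1

# Cloud Router for BGP with the on-premises peer
gcloud compute routers create vpn-router-1 \
    --network=shared-vpc --region=us-central1 --asn=65001

# Describe the on-premises VPN gateway (two interfaces for redundancy)
gcloud compute external-vpn-gateways create onprem-gw \
    --interfaces=0=203.0.113.1,1=203.0.113.2

# One IPsec tunnel per HA VPN gateway interface
gcloud compute vpn-tunnels create tunnel-0 \
    --vpn-gateway=ha-vpn-gw-1 --peer-external-gateway=onprem-gw \
    --peer-external-gateway-interface=0 --interface=0 \
    --shared-secret=SECRET --router=vpn-router-1 --region=us-central1
```

After the tunnels are up, add BGP sessions on the Cloud Router for each tunnel interface.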
3. DNS
Overview
In a hybrid environment, DNS resolution can be performed either in Google Cloud or on-premises. Let’s consider a use case where on-premises DNS servers are authoritative for on-premises DNS zones, and Cloud DNS is authoritative for Google Cloud zones.
A. On-premises DNS: Configure your on-premises DNS server to be authoritative for on-premises DNS zones. Set up DNS forwarding (for Google Cloud DNS names) by targeting the Cloud DNS inbound forwarding IP address, which is created via the Inbound Server Policy configuration in the Shared VPC. This allows the on-premises network to resolve Google Cloud DNS names.
B. Host Project (Shared VPC) — DNS Egress Proxy: Advertise the Google DNS Egress Proxy range 35.199.192.0/19 to the on-premises network via the Cloud Routers. Outbound DNS requests from Google to on-premises are sourced from this IP address range.
C. Host Project (Shared VPC) — Cloud DNS:
- Configure an Inbound Server Policy for inbound DNS requests from on-premises.
- Configure DNS forwarding zone (for on-premises DNS zones) targeting the on-premises DNS resolvers.
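A sketch of steps B and C with gcloud; the router name, network name, on-premises domain, and resolver addresses are placeholders:

```shell
# Step C: inbound server policy — allocates an IP in the Shared VPC that
# on-premises resolvers can target for Google Cloud DNS names
gcloud dns policies create inbound-policy \
    --networks=shared-vpc --enable-inbound-forwarding \
    --description="Inbound DNS forwarding from on-premises"

# Step C: forwarding zone — sends queries for on-premises names
# to the on-premises resolvers
gcloud dns managed-zones create onprem-zone \
    --dns-name="corp.example.com." --visibility=private \
    --networks=shared-vpc --forwarding-targets=192.168.1.10,192.168.1.11 \
    --description="Forward to on-premises DNS"

# Step B: advertise the DNS egress proxy range to on-premises
gcloud compute routers update onprem-router --region=us-central1 \
    --advertisement-mode=custom --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=35.199.192.0/19
```

Note that switching the Cloud Router to custom advertisement mode replaces its default advertisements, so `all_subnets` is included explicitly.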
4. Private Service Connect (PSC) for Google APIs (access to all supported APIs and services)
Overview
You can use Private Service Connect (PSC) to access all supported Google APIs and services from Google Compute Engine (GCE) hosts and on-premises hosts using the internal IP address of a PSC endpoint in the Shared VPC. Let’s consider PSC access to a service in Service Project 4 via the Shared VPC.
Create a PSC Endpoint
A. Choose a PSC endpoint address (e.g., 10.1.1.1) and create a PSC endpoint in the Shared VPC with a target of “all-apis,” which gives access to all supported Google APIs and services. Service Directory automatically creates a DNS record (with the DNS name of p.googleapis.com) linked to the PSC endpoint IP address.
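A PSC endpoint for Google APIs is a reserved internal address plus a global forwarding rule. A minimal sketch, with the address, network, and router names as placeholders (the endpoint name must be short and lowercase):

```shell
# Reserve the internal IP for the PSC endpoint
gcloud compute addresses create psc-ip \
    --global --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.1.1.1 --network=shared-vpc

# Forwarding rule targeting the all-apis bundle
gcloud compute forwarding-rules create pscgoogapis \
    --global --network=shared-vpc --address=psc-ip \
    --target-google-apis-bundle=all-apis

# Advertise the endpoint /32 to on-premises (assumes the Cloud Router
# is already in custom advertisement mode)
gcloud compute routers update onprem-router --region=us-central1 \
    --add-advertisement-ranges=10.1.1.1/32
```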
Access from GCE Hosts
The GCE-4 host in Service Project 4 can access all supported Google APIs and services via the PSC endpoint in the Shared VPC.
B. Enable Private Google Access on all subnets with compute instances that require access to Google APIs via PSC.
C. If your GCE clients can use custom DNS names (e.g., storage-xyz.p.googleapis.com), you can use the auto-created p.googleapis.com DNS name.
D. If your GCE clients cannot use custom DNS names, you can create Cloud DNS records using the default DNS names (e.g., storage.googleapis.com).
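For step D, the Cloud DNS records can be sketched as follows; the zone name and network are placeholders, and 10.1.1.1 is the example PSC endpoint address from above:

```shell
# Private zone overriding googleapis.com inside the Shared VPC
gcloud dns managed-zones create googleapis-zone \
    --dns-name="googleapis.com." --visibility=private \
    --networks=shared-vpc --description="Route default API names to PSC"

# Point a default service name at the PSC endpoint address
gcloud dns record-sets create storage.googleapis.com. \
    --zone=googleapis-zone --type=A --ttl=300 --rrdatas=10.1.1.1
```

Repeat the A record for each default service name your clients use.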
Access from on-premises hosts
On-premises hosts can access all supported Google APIs and services via the PSC endpoint in the Shared VPC.
E. Advertise the PSC endpoint address to the on-premises network.
F. If your on-premises clients can use custom DNS names (e.g., storage-xyz.p.googleapis.com), you can create A records mapping the custom DNS names to the PSC endpoint address.
G. If your on-premises clients cannot use custom DNS names, you can create A records mapping the default DNS names (e.g., storage.googleapis.com) to the PSC endpoint address.
5. Private Service Connect (PSC) for Google APIs (access to APIs and services supported by VPC Service Controls)
Overview
You can use Private Service Connect (PSC) to access Google APIs and services supported by VPC Service Controls from Google Compute Engine (GCE) hosts and on-premises hosts, using the internal IP address of a PSC endpoint in the Shared VPC. Let’s consider PSC access to a service in Service Project 4 via the Shared VPC.
Create a PSC Endpoint
A. Choose a PSC endpoint address (e.g., 10.1.1.1) and create a PSC endpoint in the Shared VPC with a target of “vpc-sc,” which gives access to Google APIs and services that are supported by VPC Service Controls. Service Directory automatically creates a DNS record (with the DNS name of p.googleapis.com) linked to the PSC endpoint IP address.
Access from GCE hosts
The GCE-4 host in Service Project 4 can access Google APIs and services supported by VPC Service Controls via the PSC endpoint in the Shared VPC.
B. Enable Private Google Access on all subnets with compute instances that require access to Google APIs via PSC.
C. If your GCE clients can use custom DNS names (e.g., storage-xyz.p.googleapis.com), you can use the auto-created p.googleapis.com DNS name.
D. If your GCE clients cannot use custom DNS names, you can create Cloud DNS records using the default DNS names (e.g., storage.googleapis.com).
Access from on-premises hosts
On-premises hosts can access Google APIs and services supported by VPC Service Controls via the PSC endpoint in the Shared VPC.
E. Advertise the PSC endpoint address to the on-premises network.
F. If your on-premises clients can use custom DNS names (e.g., storage-xyz.p.googleapis.com), you can create A records mapping the custom DNS names to the PSC endpoint address.
G. If your on-premises clients cannot use custom DNS names, you can create A records mapping the default DNS names (e.g., storage.googleapis.com) to the PSC endpoint address.
6. VPC Service Controls
Overview
VPC Service Controls uses ingress and egress rules to control access to and from a service perimeter. The rules specify the direction of allowed access for different identities and resources.
Let’s consider a specific use case where we require access to a protected service in Service Project 4 via the Shared VPC. Below is a description of the VPC Service Controls configuration for this scenario.
A. Perimeter — service project: The perimeter contains a service project (Service Project 4) and includes Google APIs and services to be protected in the service project.
B. API access from GCE hosts: A GCE client can access secured APIs through a PSC endpoint in the Shared VPC. Let’s consider the perimeter around Service Project 4. The network interface of the compute instance GCE-4 is in the Shared VPC of the host project. API calls from the GCE-4 instance to a service (e.g., storage.googleapis.com) in Service Project 4 appear to originate from the host project, where the instance interface and PSC endpoint are located.
C. Ingress rule — host project into perimeter: Configure an ingress rule that allows Google API calls from the host project to the protected services in the Service Project 4 perimeter. This rule allows API calls from the GCE instances (e.g., GCE-4) into the perimeter.
Here's an outline of the configuration. Note that VPC Service Controls ingress rules are part of the perimeter configuration, not VPC firewall rules:
1. Open the perimeter:
- In the Google Cloud Console, go to Security > VPC Service Controls.
- Select the access policy that contains the Service Project 4 perimeter and edit the perimeter.
2. Create a new ingress rule:
- Under Ingress policy, click Add rule.
- For the source, specify the host project (identified by project number) so that API calls originating from the PSC endpoint and instance interfaces in the Shared VPC are allowed.
- For identities, select the service accounts or users permitted to call the protected APIs, or allow any identity if that fits your security posture.
- For the destination, select the protected services to allow (e.g., storage.googleapis.com) and, optionally, restrict the allowed methods.
3. Verify the rule:
- Once the rule is saved, review its details to ensure it's configured correctly.
- Test the rule by making an API call from a GCE instance in the host project (e.g., GCE-4) to a protected service in Service Project 4; denied calls appear in the VPC Service Controls audit logs.
- Additional considerations:
- To allow only specific workloads, scope the rule to specific identities rather than any identity.
- Dry-run mode lets you evaluate the rule's effect before enforcing it.
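The ingress rule described in step C can also be expressed declaratively. A minimal sketch, assuming the host project number is 1111111111 and Cloud Storage is the protected service (both placeholders):

```yaml
# ingress.yaml — allow API calls from the host project into the perimeter
- ingressFrom:
    identityType: ANY_IDENTITY        # or list specific identities instead
    sources:
    - resource: projects/1111111111   # host project number (placeholder)
  ingressTo:
    operations:
    - serviceName: storage.googleapis.com
      methodSelectors:
      - method: "*"
    resources:
    - "*"
```

Apply it with `gcloud access-context-manager perimeters update PERIMETER_NAME --set-ingress-policies=ingress.yaml`.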
D. API access from on-premises hosts: On-premises hosts can access secured APIs in Service Project 4 via the PSC endpoint in the Shared VPC. API calls from on-premises to services in Service Project 4 appear to originate from the host project, where the Interconnect and the PSC endpoint are located. The ingress rule (step C) allows API calls from on-premises into the Service Project 4 perimeter via the host project.
As enterprises migrate workloads to the cloud, they must understand the various hybrid networking patterns to ensure seamless connectivity between on-premises and cloud environments. This blog has outlined key configurations that will help you achieve this goal. In future posts, we'll dive deeper into advanced patterns to help you optimize your hybrid cloud strategy on Google Cloud. Stay tuned for the next installment as we explore multi-VPC and appliance-based connectivity solutions.
Read Part 2 of this series!