Google Cloud Hybrid Networking Patterns — Part 3
by Jasbir Singh, Staff Consulting Architect, Public Cloud, Rackspace Technology
Introduction
In Part 1 and Part 2 of this series, we explored the foundational networking patterns for hybrid connectivity between on-premises systems and Google Cloud. Part 1 focused on connecting to a single or shared VPC, while Part 2 expanded into hybrid connectivity across multiple VPC networks. In this final installment, we’ll dive into more advanced patterns, specifically hybrid connectivity using appliances in Shared VPC networks within a hub architecture on Google Cloud.
Hybrid connectivity using appliances (Shared VPC networks in Hub)
1. Interconnect to On-Premises
Consider a use case where workloads are organized into separate Shared VPC networks for Production (Prod) and Non-Production (Non-Prod). The Interconnect (or HA-VPN) from on-premises (or other clouds) terminates directly into the External VPC.
This pattern provides hybrid connectivity to:
- IaaS resources within the workload Shared VPC networks
- Google APIs and services (e.g., storage.googleapis.com, *.run.app) in the workload projects
- Google Cloud managed services using Private Services Access
Use this pattern when:
- You have multiple workload Shared VPC networks that require layer 7 inspection for traffic between VPC networks and on-premises. While this example shows two Shared VPC networks (Prod and Non-Prod), you can support up to seven VPC networks. The workload VPC networks communicate via network virtual appliances (NVAs), which apply network security policies to allow or deny traffic.
- You have multiple workload Shared VPC networks that must share a common connection to on-premises or other clouds. In this example, connectivity to on-premises and other clouds is routed through the NVA and the External VPC.
- You need network connectivity to managed services using Private Services Access within the workload Shared VPC networks. This pattern allows access to managed services from on-premises and any workload Shared VPC.
Scaling out (number of workload Shared VPC networks)
The maximum number of workload Shared VPC networks is seven, limited by the maximum number of network interfaces per instance. If an additional interface is required for management traffic to the NVA, only six workload Shared VPC networks can be deployed.
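The interface limit above can be seen in how the NVA itself is created. The following is a minimal sketch, assuming illustrative network, subnet, zone, and instance names (none of which appear in the original): one NIC lands in the External VPC and one in each workload Shared VPC, and IP forwarding is enabled so the instance can route traffic between them.

```shell
# NVA with one NIC in the External VPC and one per workload Shared VPC.
# Each NIC must be in a different VPC network; the machine type must
# support at least as many NICs as networks attached.
gcloud compute instances create nva-us-east4-a \
    --zone=us-east4-a \
    --machine-type=n2-standard-4 \
    --can-ip-forward \
    --network-interface=network=external-vpc,subnet=ext-us-east4,no-address \
    --network-interface=network=prod-shared-vpc,subnet=prod-us-east4,no-address \
    --network-interface=network=nonprod-shared-vpc,subnet=nonprod-us-east4,no-address
```

Adding a further `--network-interface` per workload Shared VPC is what eventually runs into the per-instance NIC ceiling.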
- Cloud Interconnect: Set up a Dedicated Interconnect or Partner Interconnect to Google Cloud. Connect to two Edge Availability Domains (EAD) within the same Metro area to achieve a 99.99% SLA. Your Cloud Interconnects can connect to multiple regions in the same Shared VPC.
- VLAN Attachment: A VLAN attachment connects your interconnect in a Google point of presence (PoP) to a cloud router in a specified region.
- Cloud Router: A Cloud Router exchanges dynamic routes between your VPC networks and on-premises routers. You can configure dynamic routing between your on-premises routers and a Cloud Router in a particular region. Each Cloud Router is implemented by two software tasks that provide two interfaces for high availability. Configure BGP routing to each Cloud Router interface.
- VPC Global Dynamic Routing: Configure global dynamic routing in the Shared VPC to allow the exchange of dynamic routes between all regions.
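The steps above can be sketched with gcloud, assuming placeholder names for the router, interconnect, VLAN attachment, region, and ASNs (none of these identifiers come from the original):

```shell
# Enable global dynamic routing so learned routes propagate to all regions.
gcloud compute networks update external-vpc --bgp-routing-mode=global

# Cloud Router that will speak BGP with the on-premises routers.
gcloud compute routers create cr-us-east4 \
    --network=external-vpc --region=us-east4 --asn=65001

# VLAN attachment tying a Dedicated Interconnect to the Cloud Router.
gcloud compute interconnects attachments dedicated create vlan-ead1 \
    --interconnect=my-interconnect --router=cr-us-east4 \
    --region=us-east4 --vlan=100

# Router interface on the attachment, then a BGP peer on that interface.
gcloud compute routers add-interface cr-us-east4 \
    --interface-name=if-ead1 --interconnect-attachment=vlan-ead1 \
    --ip-address=169.254.10.1 --mask-length=29 --region=us-east4

gcloud compute routers add-bgp-peer cr-us-east4 \
    --peer-name=onprem-ead1 --interface=if-ead1 \
    --peer-ip-address=169.254.10.2 --peer-asn=65010 --region=us-east4
```

For the 99.99% SLA, you would repeat the attachment and BGP-peer steps for a second Edge Availability Domain in the same metro.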
2. HA-VPN to On-Premises
- Cloud HA-VPN: The Cloud HA-VPN gateway establishes IPsec tunnels to the on-premises VPN gateway over the Internet. HA-VPN offers a 99.99% SLA. You can configure multiple HA-VPN tunnels into different regions in the External VPC.
- Cloud Routers: Configure dynamic routing between the on-premises routers and a Cloud Router in each region. Each Cloud Router provides two interfaces for high availability. Configure BGP routing to both Cloud Router interfaces.
- VPC Global Dynamic Routing: Configure global dynamic routing in the External VPC to allow the exchange of dynamic routes between all regions.
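As a sketch of the HA-VPN alternative, assuming placeholder gateway names, peer addresses, and a pre-existing Cloud Router (all illustrative, not from the original):

```shell
# HA-VPN gateway in the External VPC (two interfaces, created automatically).
gcloud compute vpn-gateways create ha-vpn-us-east4 \
    --network=external-vpc --region=us-east4

# Representation of the on-premises peer gateway and its public IPs.
gcloud compute external-vpn-gateways create onprem-gw \
    --interfaces=0=203.0.113.10,1=203.0.113.11

# First of the tunnel pair; a second tunnel on interface 1 completes
# the 99.99% SLA topology.
gcloud compute vpn-tunnels create tunnel-0 \
    --vpn-gateway=ha-vpn-us-east4 --interface=0 \
    --peer-external-gateway=onprem-gw --peer-external-gateway-interface=0 \
    --router=cr-us-east4 --ike-version=2 \
    --shared-secret=CHANGE_ME --region=us-east4
```

BGP sessions are then added to the Cloud Router over each tunnel, exactly as in the Interconnect case.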
3. Cloud DNS Forwarding and Peering
Overview
In a hybrid environment, DNS resolution can be performed in Google Cloud or on-premises. Let’s consider a use case where on-premises DNS servers are authoritative for on-premises DNS zones, while Cloud DNS is authoritative for Google Cloud zones.
- On-Premises DNS: Configure your on-premises DNS server to be authoritative for on-premises DNS zones. Set up DNS forwarding for Google Cloud DNS names by targeting the Cloud DNS inbound forwarding IP address, created via the Inbound Server Policy in the External VPC. This setup allows the on-premises network to resolve Google Cloud DNS names.
- External VPC — DNS Egress Proxy: Advertise the Google DNS egress proxy range (35.199.192.0/19) to the on-premises network via the Cloud Routers. Outbound DNS requests from Google Cloud to on-premises originate from this IP address range.
- External VPC — Cloud DNS: a) Configure an Inbound Server Policy for DNS requests from on-premises. b) Set up a Cloud DNS Forwarding Zone for on-premises DNS names, targeting on-premises DNS resolvers.
- Hub Host Project — Cloud DNS (Non-Prod): a) Configure a DNS Peering Zone for on-premises DNS names, targeting the External VPC as the peer network. This configuration allows Non-Prod resources to resolve on-premises DNS names. b) Set up Non-Prod DNS Private Zones in the Hub Host Project and attach the Non-Prod Shared VPC, Prod Shared VPC, and External VPC to the zone. This allows all hosts (on-premises and in all service projects) to resolve Non-Prod DNS names.
- Hub Host Project — Cloud DNS (Prod): a) Configure a DNS Peering Zone for on-premises DNS names, setting the External VPC as the peer network. This allows Prod resources to resolve on-premises DNS names. b) Set up Prod DNS Private Zones in the Hub Host Project and attach the Prod Shared VPC, Non-Prod Shared VPC, and External VPC to the zone. This allows all hosts (on-premises and in all service projects) to resolve Prod DNS names.
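The DNS pieces above can be sketched with gcloud. The domain, resolver IPs, and project/network names below are illustrative placeholders, not values from the original:

```shell
# Inbound Server Policy: gives the External VPC an inbound forwarding
# address that on-premises resolvers can target.
gcloud dns policies create inbound-from-onprem \
    --description="Accept DNS queries from on-premises" \
    --networks=external-vpc --enable-inbound-forwarding

# Forwarding zone in the External VPC: sends on-premises names to the
# on-premises resolvers.
gcloud dns managed-zones create onprem-forwarding \
    --description="Forward corp names to on-premises resolvers" \
    --dns-name=corp.example.com. --visibility=private \
    --networks=external-vpc \
    --forwarding-targets=10.10.0.53,10.10.1.53

# Peering zone for the Prod Shared VPC: resolves on-premises names by
# peering to the External VPC (repeat similarly for Non-Prod).
gcloud dns managed-zones create onprem-peering-prod \
    --description="Resolve corp names via the External VPC" \
    --dns-name=corp.example.com. --visibility=private \
    --networks=prod-shared-vpc \
    --target-network=external-vpc --target-project=hub-host-project
```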
4. Private Service Connect (PSC) for Google APIs (Access to All Supported APIs and Services)
Overview
You can use Private Service Connect (PSC) to access all supported Google APIs and services from Google Compute Engine (GCE) hosts and on-premises. Let’s consider PSC access to a service in Service Project 4 via the External VPC and Prod Shared VPC.
Creating PSC Endpoints
- Choose a PSC endpoint address (e.g., 10.0.0.1) and create a PSC endpoint in the External VPC with the target “all-apis,” which provides access to all supported Google APIs and services.
- Choose a PSC endpoint address (e.g., 10.2.2.2) and create a PSC endpoint in the Prod Shared VPC with the target “all-apis.” The Service Directory automatically creates a DNS record (p.googleapis.com) linked to each PSC endpoint IP address.
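The Prod Shared VPC endpoint from the second bullet might be created as follows. The 10.2.2.2 address matches the example above; the resource and network names are placeholders. Note that PSC endpoint names for Google APIs must be short and contain only lowercase letters and digits, since the name becomes part of the auto-created p.googleapis.com DNS entry.

```shell
# Reserve the PSC endpoint address in the Prod Shared VPC.
gcloud compute addresses create psc-prod-apis \
    --global --purpose=PRIVATE_SERVICE_CONNECT \
    --network=prod-shared-vpc --addresses=10.2.2.2

# PSC endpoint (forwarding rule) targeting the all-apis bundle.
gcloud compute forwarding-rules create pscprodapis \
    --global --network=prod-shared-vpc \
    --address=psc-prod-apis \
    --target-google-apis-bundle=all-apis
```

The External VPC endpoint (e.g., 10.0.0.1) is created the same way against the External VPC network.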
Access from GCE Hosts
The GCE-4 host in Service Project 4 can access all supported Google APIs via the PSC endpoint (10.2.2.2) in the Prod Shared VPC.
- Enable Private Google Access on all subnets with compute instances that require access to Google APIs via PSC.
- If your GCE clients can use custom DNS names (e.g., storage-xyz.p.googleapis.com), use the auto-created p.googleapis.com DNS name.
- If your GCE clients cannot use custom DNS names, create Cloud DNS records using the default DNS names (e.g., storage.googleapis.com).
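For the second case, a private zone with records for the default names can steer traffic to the PSC endpoint. This is a minimal sketch, assuming the 10.2.2.2 endpoint from the example above and an illustrative zone name:

```shell
# Private zone for googleapis.com attached to the Prod Shared VPC.
gcloud dns managed-zones create googleapis-private \
    --description="Route default API names to the PSC endpoint" \
    --dns-name=googleapis.com. --visibility=private \
    --networks=prod-shared-vpc

# A record mapping the default service name to the PSC endpoint address;
# repeat for each service the clients use.
gcloud dns record-sets create storage.googleapis.com. \
    --zone=googleapis-private --type=A --ttl=300 --rrdatas=10.2.2.2
```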
Access from On-premises Hosts
On-premises hosts can access all supported Google APIs via the PSC endpoint in the External VPC.
- Advertise the PSC endpoint address to the on-premises network.
- If on-premises clients can use custom DNS names, create A records mapping the custom DNS names to the PSC endpoint address.
- If on-premises clients cannot use custom DNS names, create A records mapping the default DNS names to the PSC endpoint address.
5. Private Service Connect (PSC) for Google APIs (Access to APIs and Services Supported by VPC Service Controls)
Overview
You can use Private Service Connect (PSC) to access the Google APIs and services supported by VPC Service Controls from Google Compute Engine (GCE) hosts and on-premises. Let’s consider PSC access to a service in Service Project 4 via the External VPC and Prod Shared VPC.
Creating PSC Endpoints
- Choose a PSC endpoint address (e.g., 10.0.0.1) and create a PSC endpoint in the External VPC with the target “vpc-sc,” which provides access to the Google APIs and services supported by VPC Service Controls.
- Choose a PSC endpoint address (e.g., 10.2.2.2) and create a PSC endpoint in the Prod Shared VPC with the target “vpc-sc.” The Service Directory automatically creates a DNS record (p.googleapis.com) linked to each PSC endpoint IP address.
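This endpoint is created exactly like the all-apis one; only the bundle target changes. A minimal sketch with placeholder names:

```shell
# Reserve the endpoint address (10.2.2.2 from the example above).
gcloud compute addresses create psc-prod-sec \
    --global --purpose=PRIVATE_SERVICE_CONNECT \
    --network=prod-shared-vpc --addresses=10.2.2.2

# The vpc-sc bundle exposes only APIs that support VPC Service Controls.
gcloud compute forwarding-rules create pscprodsec \
    --global --network=prod-shared-vpc \
    --address=psc-prod-sec \
    --target-google-apis-bundle=vpc-sc
```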
Access from GCE Hosts
The GCE-4 host in Service Project 4 can access the Google APIs supported by VPC Service Controls via the PSC endpoint (10.2.2.2) in the Prod Shared VPC.
- Enable Private Google Access on all subnets with compute instances that require access to Google APIs via PSC.
- If your GCE clients can use custom DNS names, use the auto-created p.googleapis.com DNS name.
- If your GCE clients cannot use custom DNS names, create Cloud DNS records using the default DNS names.
Access from On-Premises Hosts
On-premises hosts can access the Google APIs supported by VPC Service Controls via the PSC endpoint in the External VPC.
- Advertise the PSC endpoint address to the on-premises network.
- If on-premises clients can use custom DNS names, create A records mapping the custom DNS names to the PSC endpoint address.
- If on-premises clients cannot use custom DNS names, create A records mapping the default DNS names to the PSC endpoint address.
6. VPC Service Controls
Overview
VPC Service Controls uses ingress and egress rules to control access to and from a service perimeter. These rules specify the direction of allowed access to and from different identities and resources.
Consider a use case where access is required to a protected service in Service Project 4 via the External VPC and Prod Shared VPC. This is a simple example and not exhaustive.
VPC Service Controls Actions:
- Service Project Perimeter: The perimeter contains Service Project 4 and includes Google APIs and services to be protected in the service project.
- API Access from GCE Hosts: A GCE client can access secured APIs through a PSC endpoint in a Shared VPC. The network interface of the GCE-4 instance is in the Prod Shared VPC of the Hub Host Project. API calls from the GCE-4 instance to a service (e.g., storage.googleapis.com) in Service Project 4 appear to originate from the Hub Host Project, where the instance interface and PSC endpoint are located.
- Ingress Rule — Hub Host Project into Perimeter: Configure an ingress rule that allows Google API calls from the Hub Host Project to the protected services in the Service Project 4 perimeter. This rule allows API calls from GCE instances (e.g., GCE-4) into the perimeter.
- API Access for On-Premises Hosts: On-premises hosts can access secured APIs in Service Project 4 via the PSC endpoint in the External VPC. API calls from on-premises to services in Service Project 4 appear to originate from the Hub Host Project, where the Interconnect and PSC endpoint are located. The ingress rule (from the previous step) allows API calls from on-premises to the Service Project 4 perimeter via the Hub Host Project.
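The ingress rule described above might look like the following. The perimeter name, access policy ID, project number, and the choice of storage.googleapis.com as the protected service are illustrative placeholders:

```shell
# Ingress rule: allow Google API calls originating from the Hub Host
# Project (where the GCE NICs and PSC endpoints live) into the perimeter.
cat > ingress.yaml <<'EOF'
- ingressFrom:
    identityType: ANY_IDENTITY
    sources:
    - resource: projects/111111111111   # Hub Host Project number
  ingressTo:
    operations:
    - serviceName: storage.googleapis.com
      methodSelectors:
      - method: '*'
    resources:
    - '*'
EOF

# Attach the rule to the Service Project 4 perimeter.
gcloud access-context-manager perimeters update sp4-perimeter \
    --set-ingress-policies=ingress.yaml --policy=POLICY_ID
```

In practice you would narrow `identityType`, the method selectors, and the resource list to the minimum your workloads need.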
Unlocking the Full Potential of Hybrid Cloud
Hybrid connectivity is the key to driving both agility and innovation in the cloud. Throughout this series, we’ve explored a range of networking patterns to help you connect on-premises systems with Google Cloud seamlessly, from foundational VPC setups to advanced appliance-based architectures.
By choosing the right patterns for your workloads, you can achieve secure, efficient, and scalable cloud operations. We hope this series has provided the clarity and guidance you need to confidently design your hybrid networking solutions. If you have any questions or need tailored support, our team at Rackspace Technology is here to help you unlock the full potential of your cloud infrastructure.