Deploy Palo Alto Firewall on Google Cloud
By Rishabh Dogra, Staff Consulting Architect, Rackspace Technology
An organization came to its partners, Rackspace Technology® and Google Cloud, with a request to deploy a Palo Alto Networks Next-Generation Firewall (NGFW) as the gateway for traffic entering and exiting its cloud environment, with all traffic passing through Palo Alto in each Google Cloud region.
The Rackspace team and Google decided to use a virtual private cloud (VPC) network peering deployment model, which lets a single virtual machine use multiple network interface cards to connect directly to different VPCs. This approach helped the client's team reduce the number of Palo Alto deployments required, because traffic from multiple clients can be filtered through the same scalable Palo Alto deployment in each region.
Deploying Palo Alto instances in each actively used Google Cloud region is important because it helps minimize latency.
The Palo Alto Networks deployment team provided automation scripts for the required software and licenses, and these were integrated into a reusable, Terraform-based infrastructure deployment. This gives the client the flexibility to extend the same Palo Alto infrastructure to future regions.
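To illustrate the per-region reuse pattern, here is a purely hypothetical Terraform sketch; the module name, source path, input variables and CIDR ranges are placeholders, not the client's actual code:

# Hypothetical reusable module invoked once per Google Cloud region.
module "palo_alto_europe_west2" {
  source          = "./modules/palo-alto-stack"   # placeholder module path
  region          = "europe-west2"
  trust_cidr      = "10.10.0.0/24"                # example CIDR ranges
  untrust_cidr    = "10.10.1.0/24"
  management_cidr = "10.10.2.0/24"
}

module "palo_alto_us_east4" {
  source          = "./modules/palo-alto-stack"
  region          = "us-east4"
  trust_cidr      = "10.20.0.0/24"
  untrust_cidr    = "10.20.1.0/24"
  management_cidr = "10.20.2.0/24"
}

Each additional region then becomes another module block with its own region name and address ranges.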
1. Introduction
1.1 Background
With the increasing adoption of cloud services, ensuring network security has become a critical concern for enterprises. Traditional security models often fall short in addressing the needs of cloud-native environments. This article outlines the deployment of Palo Alto NGFW on Google Cloud to secure the organization's cloud environment.
1.2 Purpose
The purpose of this article is to detail the deployment strategy for Palo Alto's NGFW on Google Cloud, addressing the challenges encountered and the solutions provided during the implementation.
2. Problem statement
2.1 Cloud-only design with no on-premises communication
The design for the cloud foundation required a completely cloud-native setup with no communication to on-premises systems. Cloud-native workloads needed to communicate securely with other services over the public internet, with provisions for external availability.
2.2 Network management models
The deployment required different network management models based on client usage modes, including:
- Mode 1: Centrally managed by the CloudHub team.
- Mode 2: Dedicated VPC managed by the client with guardrails at the folder and project level.
- Mode 3: Like Mode 2, but with a higher degree of client control.
2.3 Security and compliance requirements
The deployment had to deliver the following:
- All resources deployed using Terraform.
- Disconnected networking, with applications acting as boundaries.
- Firewalls deployed across all networks and managed centrally for whitelisting and blacklisting.
- Firewall support for URL-based filtering and packet inspection.
- Network VPC flow logs synchronized to a centralized location for SIEM export.
- Public interfaces to facilitate secure intra-application communication when necessary.
2.4 Detailed problem description
Enterprises moving to cloud platforms like Google Cloud face several challenges, including:
- Cybersecurity: Addressing new attack vectors, helping to ensure consistent policy enforcement and deploying advanced threat protection.
- Network management and performance: Managing inter- and intra-VPC traffic and helping to ensure low latency and high bandwidth connections.
- Scalability and flexibility: Security solutions must scale with the enterprise's growth and adapt to changing network architectures.
- Operational efficiency: Simplifying security management and balancing cost with performance.
2.5 Evidence and data supporting the problem
- Increasing cloud adoption: Worldwide end-user spending on public cloud services is forecasted to grow 20.4% to $675.4 billion in 2024, up from $561 billion in 2023, according to Gartner. Generative AI and application modernization are driving this growth.
- Rising cybersecurity threats: IBM's 2023 report highlighted an increase in cyberattacks, with the average data breach cost rising to $4.45 million. Public cloud environments are increasingly targeted, necessitating advanced security measures.
- Complexity in policy enforcement: A McAfee study revealed that 81% of organizations struggle with enforcing consistent security policies across hybrid environments, emphasizing the need for integrated solutions like the one proposed by Palo Alto and Google Cloud.
- Network performance and management: Latency and bandwidth constraints were identified as primary barriers to cloud adoption in a survey by IDG, with 50% of respondents citing these issues.
Case studies and examples
- High-profile breaches: High-profile data breaches, such as the Capital One breach in 2019, where misconfigured firewalls in a cloud environment led to the exposure of sensitive data, demonstrate the importance of robust cloud security configurations.
- Industry reports: Sources such as IDC, Gartner and Forrester frequently highlight the challenges organizations face in securing cloud environments, managing performance and enforcing consistent policies. These reports provide empirical data supporting the need for integrated solutions like the Palo Alto and Google Cloud solution described here.
3. Solution overview
The solution involves using the VPC Network Peering model on Google Cloud for a Palo Alto NGFW deployment. This model allows a single virtual machine (VM) to use multiple network interface cards to connect to different VPCs, reducing the number of deployments required per region and maintaining low latency.
3.1 Key benefits and features
- Improved network security: Isolated traffic and centralized security policies across multiple VPCs
- Enhanced performance: Low-latency, high-bandwidth connections without public internet traversal
- Cost efficiency: No data transfer charges within regions and simplified network architecture
- Scalability and flexibility: Easy expansion of network architecture and flexible deployment models
- Operational efficiency: Simplified management of peered VPCs and integrated monitoring
3.2 Challenges with the solution
- Complexity in setup and management: Initial setup is complex, requiring significant operational overhead.
- Cost considerations: Licensing and operational costs for Palo Alto firewalls can be substantial.
- Limited peering connections: Google Cloud imposes limits on the number of VPC peering connections.
- Latency across regions: Increased latency when VPCs are in different regions.
4. Detailed solution description
Palo Alto Networks' VM-Series NGFW is a virtualized firewall designed to provide advanced security capabilities in cloud environments like Google Cloud. It offers features such as application visibility and control, threat prevention and secure network segmentation, which are critical for protecting workloads deployed in Google Cloud.
Core components of the solution
The setup of Palo Alto Networks’ VM-Series NGFW in Google Cloud involves several key components and considerations, which help ensure robust security and high availability.
1. VPC architecture
The foundation of the network architecture in Google Cloud is the VPC. VPCs are segmented into different environments, such as trust VPC, untrust VPC, management VPC and shared VPCs. Each of these plays a distinct role in the overall security architecture, including:
- Trust VPC: Hosts trusted applications and workloads. It is where the internal load balancers and Palo Alto’s VMs operate.
- Untrust VPC: Manages untrusted traffic from the internet, using external load balancers that direct traffic to the Palo Alto firewalls for inspection.
- Management VPC: Hosts the management components, such as Panorama (centralized management for Palo Alto firewalls) and Redis for session resiliency.
- Shared VPCs: Facilitate shared resources and services, including inter-project communication, while maintaining workload isolation.
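As an illustration of this layout, here is a minimal Terraform sketch; the network names, region, CIDR range and the single peering shown are placeholders rather than the client's actual values:

# Custom-mode VPCs for the firewall architecture (names are examples).
resource "google_compute_network" "trust" {
  name                    = "trust-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_network" "untrust" {
  name                    = "untrust-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_network" "management" {
  name                    = "mgmt-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_network" "shared" {
  name                    = "shared-vpc-1"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "trust" {
  name          = "trust-subnet"
  region        = "europe-west2"
  network       = google_compute_network.trust.id
  ip_cidr_range = "10.10.0.0/24"
}

# Peer the shared (workload) VPC with the trust VPC so workload traffic can
# reach the firewalls; VPC Network Peering must be created in both directions.
resource "google_compute_network_peering" "shared_to_trust" {
  name         = "shared-to-trust"
  network      = google_compute_network.shared.id
  peer_network = google_compute_network.trust.id
}

resource "google_compute_network_peering" "trust_to_shared" {
  name         = "trust-to-shared"
  network      = google_compute_network.trust.id
  peer_network = google_compute_network.shared.id
}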
2. Ingress and egress traffic handling
- Ingress traffic:
- Traffic originating from the internet is first received by an external load balancer in the untrust VPC.
- This traffic is then forwarded to the Palo Alto VM-series NGFWs for security inspection.
- After inspection, the traffic is routed to the appropriate application hosted in the trust VPC or other connected environments.
- Egress traffic:
- Outbound traffic from applications in the trust VPC is routed through an internal load balancer.
- The traffic is inspected by the Palo Alto VMs, helping to ensure that no malicious content or unauthorized data leaves the environment.
- After inspection, the traffic is forwarded to Cloud NAT (network address translation) for secure outbound connections.
3. Session resiliency
Session resiliency is a critical feature in the Palo Alto VM-series setup to help ensure that ongoing traffic sessions are not disrupted during firewall failures, for example:
- Redis cache for session management:
- A Google Cloud Memorystore for Redis instance is deployed in the management VPC to store session information.
- In the event of a firewall failure, the Google Cloud network load balancer detects the unhealthy VM and redirects the traffic to a healthy firewall in the cluster.
- The healthy firewall retrieves the session information from Redis, helping to ensure that ongoing sessions continue uninterrupted.
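As a minimal sketch of this session-state cache, assuming placeholder names and sizes and that the management VPC and its private service access connection are defined elsewhere, the Memorystore instance could be declared in Terraform as follows:

# Memorystore for Redis instance used by the firewalls for session state.
# Name, size and region are examples; the management network is assumed to
# exist with a private service access connection already in place.
resource "google_redis_instance" "session_cache" {
  name               = "pan-session-cache"
  region             = "europe-west2"
  tier               = "STANDARD_HA"              # replicated for resiliency
  memory_size_gb     = 5
  redis_version      = "REDIS_6_X"
  authorized_network = google_compute_network.management.id
  connect_mode       = "PRIVATE_SERVICE_ACCESS"   # reachable over the PSA peering
}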
4. High availability and scalability
To help ensure high availability, the Palo Alto VM-Series NGFWs are deployed in a horizontally scalable architecture, including:
- Multiple firewall instances:
- Multiple instances of the Palo Alto NGFW are deployed across different availability zones within a region.
- The firewalls are set up in a load-balanced configuration using Google Cloud’s network load balancers.
- Automatic failover:
- The network load balancer automatically handles the failover by detecting and rerouting traffic away from any failed or unhealthy instances.
- Scale-out:
- The setup supports horizontal scaling, which allows more firewall instances to be added as the traffic load increases, without disrupting existing services.
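A hedged Terraform sketch of this horizontally scalable cluster is shown below; the instance template, health-check port and scaling thresholds are assumptions for illustration only:

# Health check used by the autohealer to detect unhealthy firewalls.
# Port 443 is an assumption; use whatever port the firewalls expose for probes.
resource "google_compute_health_check" "fw" {
  name                = "pan-fw-health-check"
  check_interval_sec  = 5
  timeout_sec         = 5
  healthy_threshold   = 2
  unhealthy_threshold = 3

  tcp_health_check {
    port = 443
  }
}

# Regional managed instance group spreads firewall VMs across zones.
# google_compute_instance_template.pan_fw is assumed to be defined elsewhere,
# built from a VM-Series image.
resource "google_compute_region_instance_group_manager" "fw" {
  name               = "pan-fw-mig"
  region             = "europe-west2"
  base_instance_name = "pan-fw"
  target_size        = 2

  version {
    instance_template = google_compute_instance_template.pan_fw.id
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.fw.id
    initial_delay_sec = 600   # give PAN-OS time to boot and bootstrap
  }
}

# Scale out on CPU utilization; thresholds are illustrative only.
resource "google_compute_region_autoscaler" "fw" {
  name   = "pan-fw-autoscaler"
  region = "europe-west2"
  target = google_compute_region_instance_group_manager.fw.id

  autoscaling_policy {
    min_replicas    = 2
    max_replicas    = 6
    cooldown_period = 300

    cpu_utilization {
      target = 0.7
    }
  }
}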
5. Centralized management
Centralized management of the Palo Alto firewalls is achieved using Panorama in the following ways:
- Panorama: This is Palo Alto Networks' centralized management solution, which provides a single interface for managing all firewall instances, including policy management, updates and monitoring.
- Role in the architecture:
- Deployed in the management VPC, Panorama simplifies operations by allowing administrators to manage security policies across multiple instances and environments consistently.
6. Integration with Google Cloud services
The Palo Alto VM-series NGFW setup integrates with several Google Cloud services to enhance security and operational efficiency, including:
- Private service access (PSA): PSA is configured to securely access Google Cloud-managed services, like Cloud SQL and Memorystore. This involves creating VPC peering connections that allow safe, private communication between your VPCs and the Google Cloud-managed service VPCs.
- Private Google Access: VM instances with only internal IP addresses can access Google Cloud APIs and services using Private Google Access. This setup helps ensure that traffic to Google Cloud services does not leave the Google Cloud network, enhancing security and reducing latency.
- Cloud NAT: Cloud NAT is used to enable internet access for VM instances without exposing their internal IP addresses, thus preserving network security.
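The following Terraform sketch shows one way the PSA and Private Google Access pieces might be declared; names and address ranges are placeholders, and the shared VPC is assumed from the earlier sketch:

# Reserve an internal range for Google-managed services, then create the
# private service access (PSA) peering from the shared VPC to Google.
resource "google_compute_global_address" "psa_range" {
  name          = "psa-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 20
  network       = google_compute_network.shared.id
}

resource "google_service_networking_connection" "psa" {
  network                 = google_compute_network.shared.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.psa_range.name]
}

# Workload subnet with Private Google Access enabled so internal-only VMs
# can still reach Google APIs.
resource "google_compute_subnetwork" "workload" {
  name                     = "workload-subnet"
  region                   = "europe-west2"
  network                  = google_compute_network.shared.id
  ip_cidr_range            = "10.30.0.0/20"
  private_ip_google_access = true
}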
7. Security policies and threat prevention
The VM-Series NGFW provides advanced security features that include:
- Application control: Policies can be set to allow or deny traffic based on applications rather than just ports and protocols, providing more granular control.
- Threat prevention: Includes capabilities like Intrusion Prevention System (IPS), anti-virus, anti-spyware and anti-malware, helping to ensure that traffic is inspected for threats in real time.
- URL filtering: Controls access to websites based on categories, helping prevent access to malicious or unwanted sites.
- User identification: Policies can be enforced based on user identities, integrating with directory services like LDAP.
Deployment and configuration steps
1. Provision VPCs:
- Set up the trust, untrust and management VPCs with appropriate subnets and firewall rules.
2. Deploy Palo Alto VM-Series Firewalls:
- Launch VM instances of the Palo Alto firewalls in the trust and untrust VPCs.
- Configure interfaces for management, untrusted and trusted networks.
3. Set up load balancers:
- Configure Google Cloud network load balancers for both ingress (external load balancer in the untrust VPC) and egress (internal load balancer in the trust VPC) traffic (see the Terraform sketch after this list).
4. Configure session resiliency:
- Deploy Redis in the management VPC and integrate it with the firewalls to maintain session state.
5. Establish PSA connections:
- Set up private service access to connect to Google Cloud-managed services securely.
6. Enable Private Google Access:
- Configure routes and DNS for VMs to access Google Cloud APIs and services privately.
7. Centralize management with Panorama:
- Deploy Panorama in the management VPC and connect it to the Palo Alto firewalls for centralized management.
8. Implement security policies:
- Define and enforce security policies in the Palo Alto firewalls, including application control, threat prevention and URL filtering.
9. Test and monitor:
- Perform thorough testing to help ensure traffic is correctly routed and inspected.
- Set up monitoring and alerting to keep track of firewall health and traffic patterns.
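Expanding on step 3, here is a minimal Terraform sketch of the two load balancers; it assumes the firewall instance group, networks and subnets from the earlier sketches, and the names, region and ports are illustrative:

# Regional health check shared by both load balancers (port is an assumption).
resource "google_compute_region_health_check" "fw_lb" {
  name   = "pan-fw-lb-hc"
  region = "europe-west2"

  tcp_health_check {
    port = 443
  }
}

# Internal TCP/UDP load balancer in the trust VPC (egress inspection).
resource "google_compute_region_backend_service" "egress_ilb" {
  name                  = "pan-egress-ilb"
  region                = "europe-west2"
  load_balancing_scheme = "INTERNAL"
  protocol              = "TCP"
  health_checks         = [google_compute_region_health_check.fw_lb.id]

  backend {
    group = google_compute_region_instance_group_manager.fw.instance_group
  }
}

resource "google_compute_forwarding_rule" "egress_ilb" {
  name                  = "pan-egress-ilb-rule"
  region                = "europe-west2"
  load_balancing_scheme = "INTERNAL"
  backend_service       = google_compute_region_backend_service.egress_ilb.id
  network               = google_compute_network.trust.id
  subnetwork            = google_compute_subnetwork.trust.id
  all_ports             = true
  allow_global_access   = true
}

# External passthrough network load balancer in the untrust VPC (ingress
# inspection). It delivers traffic to the instances' primary interface, which
# is why the interface swap described later may be required.
resource "google_compute_region_backend_service" "ingress_nlb" {
  name                  = "pan-ingress-nlb"
  region                = "europe-west2"
  load_balancing_scheme = "EXTERNAL"
  protocol              = "TCP"
  health_checks         = [google_compute_region_health_check.fw_lb.id]

  backend {
    group = google_compute_region_instance_group_manager.fw.instance_group
  }
}

resource "google_compute_forwarding_rule" "ingress_nlb" {
  name                  = "pan-ingress-nlb-rule"
  region                = "europe-west2"
  load_balancing_scheme = "EXTERNAL"
  ip_protocol           = "TCP"
  ports                 = ["80", "443"]
  backend_service       = google_compute_region_backend_service.ingress_nlb.id
}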
Technical details
In this section, we will delve into various technical aspects that play a crucial role in the setup and operation of the Palo Alto Networks VM-Series NGFW. Understanding these elements helps ensure that the firewall is properly configured to secure your cloud environment effectively.
We will explore the following key components:
Supported regions
The infrastructure will be deployed across several global regions, helping to ensure high availability and performance. The supported regions include:
- europe-west2 (London)
- europe-west3 (Frankfurt)
- us-east4 (Virginia)
- asia-southeast1 (Singapore)
- africa-south1 (Johannesburg)
Egress traffic flow
Egress traffic, which is the outbound traffic from customer workloads, follows a specific path. This traffic is first routed via VPC Peering to an internal TCP/UDP load balancer within the trust VPC. The backend of this load balancer consists of Palo Alto VMs, which inspect the traffic for security purposes. Once inspected, the traffic is forwarded to the Cloud NAT, enabling secure outbound connections to the internet or other destinations.
Egress Traffic Flow Flowchart
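A short Terraform sketch of this egress path is shown below; it assumes the internal load balancer and networks from the earlier sketches, and names and the region are placeholders:

# Default route that sends egress traffic to the internal load balancer
# fronting the firewalls. With custom-route export enabled on the peerings,
# peered shared VPCs can learn this route.
resource "google_compute_route" "egress_via_fw" {
  name         = "egress-via-pan-ilb"
  network      = google_compute_network.trust.id
  dest_range   = "0.0.0.0/0"
  priority     = 100
  next_hop_ilb = google_compute_forwarding_rule.egress_ilb.id
}

# Cloud NAT on the untrust network translates the firewalls' outbound traffic.
resource "google_compute_router" "untrust" {
  name    = "untrust-router"
  region  = "europe-west2"
  network = google_compute_network.untrust.id
}

resource "google_compute_router_nat" "untrust" {
  name                               = "untrust-nat"
  router                             = google_compute_router.untrust.name
  region                             = "europe-west2"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}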
Ingress traffic flow
Ingress traffic refers to inbound traffic from external sources, such as the internet, to applications hosted in the workload subnets. This traffic can enter the network through various methods, including:
- External load balancer: Directly routes traffic to the intended applications.
- Cloudflare or similar services (optional): Used for content delivery and protection.
If this inbound traffic needs to be filtered by an NGFW, it is first routed through an external load balancer in the untrust VPC. This load balancer then distributes the traffic to the Palo Alto instance group for inspection before it reaches the workload subnets.
Ingress Traffic Flowchart
Shared VPCs
Workload subnets are hosted within shared VPCs. When the limits or quotas of a VPC are reached, additional VPCs can be created and peered with the trust VPC. Shared VPCs are managed within dedicated network projects by CloudHub, and the subnets within these shared VPCs are made available to client projects where the workloads are deployed.
To help ensure that workloads are isolated and secure, communication between different workloads is restricted, allowing only outbound connections to the internet. This isolation is enforced using Google Cloud firewall policies, which are stateful Layer 4 rules applied directly to virtual machine network interfaces (NICs); a sample rule set follows the diagram below.
Shared VPC Architecture Diagram
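As an illustration of the isolation described above, the following Terraform sketch uses classic VPC firewall rules (the same intent can be expressed with network firewall policies); names, ranges and ports are examples, and the shared VPC is assumed from the earlier sketch:

# Deny east-west traffic between workload subnets.
resource "google_compute_firewall" "deny_workload_to_workload" {
  name      = "deny-workload-to-workload"
  network   = google_compute_network.shared.id
  direction = "INGRESS"
  priority  = 900

  deny {
    protocol = "all"
  }

  source_ranges = ["10.30.0.0/20"]   # example workload range
}

# Allow only outbound web traffic toward the internet (via the firewalls).
resource "google_compute_firewall" "allow_workload_egress" {
  name               = "allow-workload-egress"
  network            = google_compute_network.shared.id
  direction          = "EGRESS"
  priority           = 1000
  destination_ranges = ["0.0.0.0/0"]

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }
}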
Google Cloud managed services and Private Service Access
Certain Google Cloud services, such as Cloud SQL, Memorystore, Filestore, Vertex AI and others, are provided as managed services within Google Cloud-managed VPCs. To access these services securely, a private service access (PSA) peering connection is required. Each shared VPC will have its own dedicated PSA connection, which follows best practices for environment separation and minimizes the number of VPC peering connections.
Additionally, the management VPC will establish a PSA connection to host Redis clusters used by Palo Alto, and the trust VPC will have a PSA connection to potentially host other shared services, such as Google Cloud Backup and DR Service, in the future.
Google Cloud Private and Managed Services Access Workflow Diagram
Private access to Google Cloud APIs
VMs that are configured with only internal IP addresses can still access Google Cloud APIs and services (such as Google Cloud Storage, BigQuery, etc.) privately using Private Google Access. This feature helps ensure that all traffic to Google Cloud services remains within the Google Cloud network, even though it might appear to use an internet gateway.
Private Google Access should be enabled on all subnets, with exceptions applied as needed. Routing configurations are also required to direct traffic to Google Cloud APIs, with next hops set to the default internet gateway. Even though it’s termed an "internet gateway," this setup helps ensure that the traffic stays within the Google Cloud network. Additionally, DNS configurations must be established to route this traffic to the correct IP addresses.
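One common way to wire this up is sketched below in Terraform, using the private.googleapis.com range (199.36.153.8/30); the zone and record names are placeholders, and the shared VPC is assumed from the earlier sketch:

# Route Google API traffic to the default internet gateway; traffic to the
# private.googleapis.com range stays within Google's network.
resource "google_compute_route" "private_google_apis" {
  name             = "private-google-apis"
  network          = google_compute_network.shared.id
  dest_range       = "199.36.153.8/30"
  priority         = 100
  next_hop_gateway = "default-internet-gateway"
}

# Private DNS zone that resolves *.googleapis.com to the private range.
resource "google_dns_managed_zone" "googleapis" {
  name       = "googleapis-private"
  dns_name   = "googleapis.com."
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.shared.id
    }
  }
}

resource "google_dns_record_set" "private_googleapis" {
  name         = "private.googleapis.com."
  type         = "A"
  ttl          = 300
  managed_zone = google_dns_managed_zone.googleapis.name
  rrdatas      = ["199.36.153.8", "199.36.153.9", "199.36.153.10", "199.36.153.11"]
}

resource "google_dns_record_set" "googleapis_wildcard" {
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  managed_zone = google_dns_managed_zone.googleapis.name
  rrdatas      = ["private.googleapis.com."]
}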
Session resiliency on Palo Alto VMs
To help ensure session resiliency in Palo Alto VM deployments, a Redis server will be deployed in the management VPC. Session resiliency allows the Palo Alto firewalls to maintain continuous session handling even during failure events. Google Cloud network load balancers (NLBs) play a crucial role by detecting and deregistering any unhealthy Palo Alto firewalls within the horizontally scalable cluster behind them.
With session resiliency enabled, when the NLB identifies an unhealthy Palo Alto firewall, it rehashes the ongoing traffic sessions that were directed to the affected firewall and redirects them to a healthy firewall in the cluster. This capability helps ensure that the Palo Alto firewall cluster can continue inspecting long-lived application sessions, even if one of the appliances fails.
To support this failover process and maintain session continuity, a Memorystore for Redis instance is required. The Redis cache stores session information, so when the NLB detects an unhealthy firewall and reroutes traffic to a healthy VM, the new VM can access the session data from the Redis cache. This allows the healthy VM to continue inspecting and forwarding the traffic seamlessly, preserving the integrity and continuity of ongoing sessions.
5. Implementation strategy
Resources required
The components in this checklist are common to deploying a VM-Series firewall that you manage directly or with Panorama. Additional requirements apply for the Panorama plugin for services, such as Stackdriver monitoring, VM monitoring, auto scaling and securing Kubernetes deployments.
Always consult the Compatibility Matrix for Panorama plugin information for public clouds. The deployment requires the following software:
- Google Cloud account: You must have a Google Cloud user account with a linked email address, and you must know the username and password for that email address.
- Google Cloud SDK: If you have not done so already, install the Google Cloud SDK, which includes the Google Cloud APIs, the gcloud CLI and other command-line tools. You can use the command-line interface to deploy the firewall template and other templates.
- PAN-OS on VM-Series firewalls on Google Cloud: VM-Series firewall images running supported PAN-OS versions are available from the Google Cloud Marketplace.
- VM-Series firewalls: VM-Series firewalls that you want to manage from Panorama must be deployed in Google Cloud using a Palo Alto Networks image from the Google Cloud Marketplace. Firewalls must meet the minimum system requirements for the VM-Series firewall on Google Cloud.
- VM-Series licenses: You must license a VM-Series firewall to obtain a serial number. A serial number is required to add a VM-Series firewall as a Panorama managed device. If you are using the Panorama plugin for Google Cloud to deploy VM-Series firewalls, you must supply a BYOL auth code. The Google Cloud Marketplace handles your service billing, but the firewalls you deploy will interface directly with the Palo Alto Networks licensing server.
- VM-Series plugin on the firewall: VM-Series firewalls running PAN-OS 9.0 and later include the VM-Series plugin, which manages integration with public and private clouds. As shown in the Compatibility Matrix, the VM-Series plugin has a minimum version that corresponds to each PAN-OS release.
When there is a major PAN-OS upgrade, the VM-Series plugin version is automatically upgraded. For minor releases, it is up to you to determine whether a VM-Series plugin upgrade is necessary, and if so, perform a manual upgrade. See Install the VM-Series Plugin on Panorama.
- Panorama running in management mode: A Panorama physical or virtual appliance running a PAN-OS version that is the same or later than the managed firewalls. Virtual instances do not need to be deployed in Google Cloud. You must have a licensed version of Panorama. Panorama must have network access to the VPCs in which the VMs you want to manage are deployed.
If you intend to manage VMs deployed in Google Cloud, or configure features such as auto-scaling, your PAN-OS and VM-Series plugin versions must meet the public cloud requirements to support the Panorama plugin for Google Cloud. For VM-Series plugin on Panorama see Install the VM-Series Plugin on Panorama.
- Panorama plugin for Google Cloud version 2.0.0: The Google Cloud plugin manages the interactions required to license, bootstrap and configure firewalls deployed with the VM monitoring or auto-scaling templates. The Google Cloud plugin, in conjunction with the VM Monitoring or Auto Scaling templates, uses Panorama templates, template stacks and device groups to program NAT rules that direct traffic to managed VM-Series firewalls.
Step-by-step implementation plan
You can use the Google Cloud Marketplace to deploy the VM-Series firewall on a fixed vCPU capacity license (VM-Series models). The licensed images available from public clouds are:
- VM-Series Next-Generation Firewall Bundle 1
- VM-Series Next-Generation Firewall Bundle 2
- VM-Series Next-Generation Firewall (BYOL)
The Marketplace deploys an instance of the VM-Series Firewall with a minimum of one management interface and two dataplane interfaces (trust and untrust). You can add additional data plane interfaces for up to five Google Cloud Compute Engine instances in your VPC.
Before you deploy the VM-Series firewall, you must create or choose a project in your organization and create any networks and subnets that will connect to the firewall, as described in VPC Network Planning and Network Interface Planning.
You cannot attach multiple network interfaces to the same VPC network. Every interface you create must have a dedicated network with at least one subnet. Ensure that your networks include any additional data plane instances you create.
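For teams that prefer infrastructure as code over the Marketplace wizard, the following Terraform sketch approximates the same deployment. The image variable, machine type, bucket name, SSH key and subnets are placeholders or assumed to exist; take the real image name from the Palo Alto Networks Marketplace listing and check the VM-Series minimum system requirements for the machine type:

variable "vmseries_image" {
  type        = string
  description = "VM-Series image self link from the Google Cloud Marketplace listing"
}

# Illustrative equivalent of the Marketplace deployment: one management NIC
# plus untrust and trust dataplane NICs, each in its own VPC. The management
# and untrust subnet resources are assumed to exist.
resource "google_compute_instance" "pan_fw" {
  name           = "pan-fw-01"
  zone           = "europe-west2-a"
  machine_type   = "n2-standard-4"
  can_ip_forward = true               # required so the firewall can forward traffic

  boot_disk {
    initialize_params {
      image = var.vmseries_image
      size  = 60                      # 60 GB minimum boot disk
      type  = "pd-ssd"
    }
  }

  # nic0: management (or untrust, if interface swap is enabled)
  network_interface {
    subnetwork = google_compute_subnetwork.management.id
  }

  # nic1: untrust dataplane with an ephemeral external IP
  network_interface {
    subnetwork = google_compute_subnetwork.untrust.id
    access_config {}
  }

  # nic2: trust dataplane
  network_interface {
    subnetwork = google_compute_subnetwork.trust.id
  }

  metadata = {
    # Bootstrap package location; the bucket name is a placeholder.
    "vmseries-bootstrap-gce-storagebucket" = "my-bootstrap-bucket"
    "serial-port-enable"                   = "true"
    "ssh-keys"                             = "admin:${file("~/.ssh/id_rsa.pub")}"
  }
}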
Step 1: Choose a bootstrap method.
Step 2: Locate the VM-Series firewall listing in the Google Cloud Marketplace.
- Log in to the Google Cloud Console.
- From the Products and Services menu, select Marketplace.
- Search for VM-Series.
- Select one of the VM-Series firewall licensing options.
Step 3: Click Launch on Compute Engine.
Step 4: Name the instance and choose resources.
Enter the Deployment Name (this name is displayed in the Deployment Manager). The name must be unique and cannot conflict with any other deployment in the project.
Select a Zone. See Regions and Zones for a list of supported zones.
Select a Machine Type based on the VM-Series System Requirements for your license and the Minimum System Requirements for the VM-Series Firewall on Google Cloud.
Step 5: Specify instance metadata.
The options Bootstrap Bucket and Interface Swap affect the initial configuration the first time the VM-Series firewall boots.
Bootstrap Bucket (Optional): If you plan to use a bootstrap file, enter the name of a storage bucket, or the path to a folder within the storage bucket that contains the bootstrap package. You need permission to access the storage bucket. For example:
vmseries-bootstrap-gce-storagebucket=<bucketname>
or
vmseries-bootstrap-gce-storagebucket=<bucketname/directoryname>
If you choose to bootstrap with custom metadata, continue to Step 6.
Interface Swap (optional): Swap the Management interface (eth0) and the first dataplane interface (eth1) at deployment time. Interface swap is only necessary when you deploy the VM-Series firewall behind the Google Cloud HTTP(S) Load Balancing. For details, see Management Interface Swap for Google Cloud Platform Load Balancing.
SSH key: Paste in the public key from an SSH key pair. Follow the instructions for your OS in SSH Key Pair to create, copy and paste the key. Windows users must view the key in PuTTY, copy it from the user interface and paste it into the Marketplace deployment.
If the key is not formatted properly, the VM-Series firewall does not allow you to log in. You must delete the deployment and start over.
Click More to reveal additional metadata options. The options blockProjectKeys and enableSerialConsole are properties of the instance. You can change these metadata values after a successful deployment.
- blockProjectKeys (optional): If you Block Project Keys, you can use only the public SSH key you supply to access the instance.
- enableSerialConsole (optional): Interacting with the Serial Console enables you to monitor instance creation and perform interactive debugging tasks.
Step 6: Specify custom metadata.
If you choose to bootstrap with custom metadata, add any key-value pairs that you did not add in Step 5. See init-cfg.txt File Components for the list of key-value pairs. For example:
Custom Metadata Specifications
Configure the boot disk:
Boot disk type: Select from SSD Persistent disk or Standard Persistent Disk. See Storage Options.
Enter the boot disk size: 60GB is the minimum size. You can edit the disk size later, but you must stop the VM to do so.
Configure the management interface:
Management VPC network name: Choose an existing network.
Management subnet name: Choose an existing subnet.
Enable external IP for management interface (optional): If you enable this option, you can use the external IP address assigned to the management interface to connect over SSH or to access the VM-Series firewall web interface.
Enable Google Cloud Firewall rule for connections to management interface (optional): This option automatically creates a Google Cloud firewall Allow rule for an external source IP address that you supply.
Source IP in Google Cloud Firewall rule for connections to management interface: If you Enable Google Cloud Firewall rule for connections to management interface, enter a source IP address or a CIDR block.
- Do not use 0.0.0.0/0. Supply an IP address or a CIDR block that corresponds to your dedicated management IP addresses or network. Do not make the source network range larger than necessary.
- Verify the address to ensure that you do not lock yourself out.
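As a hedged example of such a rule in Terraform (the source CIDR and network name are placeholders; never use 0.0.0.0/0 here):

# Allow management access (SSH and HTTPS) only from a dedicated admin range.
resource "google_compute_firewall" "allow_pan_mgmt" {
  name      = "allow-pan-mgmt"
  network   = google_compute_network.management.id
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["22", "443"]
  }

  source_ranges = ["203.0.113.0/29"]   # example: your management network only
}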
Configure the untrust dataplane interface:
Untrust VPC network name: Choose an existing network.
Untrust subnet name: Choose an existing subnet.
Enable external IP for untrust: Enable Google Cloud to provide an ephemeral IP address to act as the external IP address.
Configure the trust dataplane interface:
Trust VPC network name: Choose an existing network.
Trust subnet name: Choose an existing subnet.
Enable external IP for trust: Enable Google Cloud to provide an ephemeral IP address to act as the external IP address.
Configure additional interfaces: You must enter the number of dataplane interfaces you want to add. The default is 0 (none). The deployment page always displays fields for five additional dataplanes numbered 4 through 8.
Additional dataplane interfaces: Enter the number of additional dataplane interfaces. If this number is 0 (the default), dataplane numbers 4 through 8 are ignored even if you fill out the interface fields. If, for example, you specify 2 and then fill out information for three interfaces, only the first two are created.
Additional dataplane # VPC name: Choose an existing network.
Dataplane # subnet name: Choose an existing subnet.
Enable External IP for dataplane # interface: Enable Google Cloud to provide an ephemeral IP address to act as the external IP address.
Deploy the instance:
Use Google Cloud Deployment Manager to view and manage your deployment.
Use the CLI to change the administrator password on the firewall.
Log in to the VM-Series firewall from the command line. In your SSH tool, connect to the External IP for the management interface, and specify the path to your private key.
Windows users: Use PuTTY to connect to the VM-Series firewall and issue command-line instructions. To specify the path to the private key, select Connection > SSH > Auth. In Private key file for authentication, click Browse to select your private key.
Enter configuration mode:
VMfirewall> configure
Enter the following command:
VMfirewall# set mgt-config users admin password
Enter and confirm a new password for the administrator.
Commit your new password:
VMfirewall# commit
Return to command mode:
VMfirewall# exit
(Optional) If you used a bootstrap file for interface swap, use the following command to view the interface mapping:
VMfirewall> debug show vm-series interfaces all
Access the VM-Series Firewall web interface:
- In a browser, create a secure (https) connection to the IP address for the management interface. If you get a network error, check to see that you have a Google Cloud firewall rule that allows the connection.
- When prompted, enter the username (admin) and the administrator password you specified from the CLI.
- (Optional) If you bootstrapped, then Verify Bootstrap Completion.
If you see problems, search the log information on the VM-Series firewall. Select Monitor > System and, in the manual search field, enter description contains 'bootstrap' and look for a message in the results indicating that the bootstrap was successful.
After you log in to the firewall, you can add administrators and create interfaces, zones, NAT rules and policy rules, just as you would on a physical firewall.
6. Benefits and ROI
Quantifying the ROI for the Palo Alto and Google Cloud integrated solution involves considering both direct and indirect benefits that can be measured in financial terms. Here’s a detailed breakdown of potential ROI and quantifiable returns:
Direct financial benefits
1. Reduction in security breaches and data loss
Cost savings from prevented breaches: The average cost of a data breach was $4.45 million in 2023, according to the IBM Cost of a Data Breach Report. By preventing even a single significant breach, organizations can save substantial amounts. Assuming a conservative prevention of one breach every two years, the annual savings would be around $2.225 million.
2. Lower data transfer costs
Savings on egress charges: VPC peering within the same region incurs no egress charges. For a company transferring significant volumes of data monthly between VPCs, the cost savings can be substantial as egress charges are eliminated. This can lead to notable annual savings, depending on the data transfer volume and the Google Cloud pricing model.
3. Optimized resource utilization
Cost efficiency from optimized traffic management: Efficient traffic routing and firewall management can lead to better resource use, potentially reducing the need for additional instances. If resource optimization leads to a 10% reduction in cloud spend, for an organization spending $1 million annually on cloud infrastructure, this translates to $100,000 in savings per year.
Indirect financial benefits
1. Improved operational efficiency
Reduced management overhead: Simplifying security management can reduce the time that IT staff spend on configuring and managing security policies. If an organization saves 10 hours per week in IT management time, at an average IT salary of $100 per hour, the annual savings would be approximately $52,000.
2. Enhanced compliance and reduced fines
Avoidance of non-compliance penalties: Meeting compliance standards can prevent costly fines. For example, non-compliance with GDPR can result in fines up to €20 million or 4% of global annual revenue. Avoiding even a single compliance penalty can save millions, depending on the organization’s revenue size.
3. Increased productivity
Reduced downtime: Enhanced security and optimized performance can lead to fewer disruptions. Assuming a 1% increase in productivity due to reduced downtime, for a company with $50 million in annual revenue, the productivity gain could translate to $500,000 annually.
Quantifiable returns
Case study evidence
- Financial institution example: A financial services company that deployed the integrated Palo Alto and Google Cloud solution reported a 50% reduction in security incidents and a 20% reduction in cloud operational costs. If its annual cloud spend was $5 million, this translates to $1 million in savings annually from reduced incidents and operational efficiencies.
Performance improvements
- Faster deployment times: Automating security policy enforcement can reduce deployment times by 30%. If an organization deploys new services monthly and each deployment previously took 10 hours, the annual time saved would be 36 hours. At a rate of $100 per hour, this represents a saving of $3,600 annually.
ROI calculation example
Assuming the following conservative estimates for an organization:
- Prevented breach savings: $2,225,000 annually
- Data transfer savings: $14,400 annually
- Resource optimization savings: $100,000 annually
- Management overhead reduction: $52,000 annually
- Avoided compliance penalties: $1,000,000 (assuming a penalty every few years)
- Increased productivity: $500,000 annually
- Deployment time savings: $3,600 annually
Total annual savings (approximately) = $3,895,000
If the annual cost of the integrated Palo Alto and Google Cloud solution is $500,000, the formula to calculate the ROI would be as follows:
ROI = (Total annual savings − Annual cost) / Annual cost × 100
Substitution of values:
ROI = (3,895,000 − 500,000) / 500,000 × 100
Final calculation:
ROI = 679%
The ROI for the integrated Palo Alto and Google Cloud solution is significant, with direct financial benefits from preventing breaches and optimizing resource use, as well as indirect benefits from improved operational efficiency, compliance and productivity. These quantifiable returns underscore the value of investing in robust cloud security and performance solutions. Find out more on our Google Cloud Services page.
