Google Cloud networking in depth: Faster, more reliable connectivity with HA VPN and 100 Gbps Dedicated Interconnect

May 16, 2019


Editor’s note: Fresh off several additions to the Google Cloud networking stack at Next ’19, here’s the latest installment in our blog series exploring the five pillars of the portfolio: ‘Connect,’ ‘Scale,’ ‘Secure,’ ‘Optimize’ and ‘Modernize.’ Today, we do a deep dive into our recent Cloud Interconnect and VPN announcements. Stay tuned in the coming weeks as we explore the Google Cloud networking pillars in depth.

Regardless of your company’s size, budget, or specific cloud needs, Google Cloud has a solution for connecting your infrastructure to our cloud: high-performance options such as Cloud Interconnect (Dedicated or Partner), Cloud VPN for lower-bandwidth needs, or Direct and Carrier Peering for easy access to G Suite.

[Image: Google Cloud VPN]

High Availability VPN with a 99.99% SLA

Regardless of which connectivity option you use, you probably also use Cloud VPN to securely connect your on-premises environment to Google Cloud Platform (GCP). At Next ’19, we announced an advanced VPN option for customers with mission-critical connectivity requirements: High Availability (HA) VPN, now in beta. With HA VPN, enterprises can connect their on-premises deployment to a GCP Virtual Private Cloud (VPC) with an industry-leading SLA of 99.99% at general availability, plus simplified setup compared to creating redundant VPNs.

With its 99.99% SLA, HA VPN adds a whole extra ‘nine’ of availability over our traditional Cloud VPN. But what does that mean in practice? The table below shows the maximum allowed monthly and annual downtimes for different SLA levels.

[Image: Google Cloud VPN SLA]

In other words, the 99.99% SLA offered by HA VPN allows significantly less downtime: at most about 4.5 minutes per month, or 53 minutes per year. This is currently the highest SLA and uptime guarantee of any public cloud provider.
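The downtime figures above follow directly from the SLA percentage. The short sketch below reproduces them; exact values vary slightly with rounding and the assumed month length (a 30-day month is used here).

```python
# Allowed downtime implied by an availability SLA.

def allowed_downtime_minutes(sla_percent: float, period_minutes: float) -> float:
    """Maximum downtime permitted while still meeting the SLA."""
    return period_minutes * (1 - sla_percent / 100)

MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 (assuming a 30-day month)
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

for sla in (99.9, 99.99):
    monthly = allowed_downtime_minutes(sla, MINUTES_PER_MONTH)
    yearly = allowed_downtime_minutes(sla, MINUTES_PER_YEAR)
    print(f"{sla}% SLA: {monthly:.1f} min/month, {yearly:.0f} min/year")
```

For the 99.99% SLA this yields roughly 4.3 minutes per month and 53 minutes per year, an order of magnitude less than the 43.2 minutes per month allowed at 99.9%.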

Technical architecture
HA VPN introduces a new high-availability VPN gateway with two interfaces (interface 0 and interface 1), each with its own external IP address. This architecture is designed to have no single point of failure on the VPN backend: each region has two sets of IP blocks (shards) with fully independent network routing stacks, and HA VPN uses separate IP pools per region, pre-allocated for each shard. Customers create two tunnels, one from each interface, to their on-premises VPN gateway, and HA VPN ensures that these redundant tunnels land on different shards, yielding a resilient architecture.
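The two-interface setup described above can be provisioned with the gcloud CLI. The sketch below uses placeholder names (NETWORK, REGION, ON_PREM_IP, SECRET, cloud-router are all assumptions); HA VPN was in beta at the time of writing, so flag names may differ from the current release. Consult the Cloud VPN documentation for the authoritative syntax.

```shell
# Create the HA VPN gateway; its two interfaces (0 and 1) each get
# an external IP address automatically.
gcloud beta compute vpn-gateways create ha-vpn-gw \
    --network=NETWORK --region=REGION

# Register the on-premises peer gateway by its public IP.
gcloud beta compute external-vpn-gateways create on-prem-gw \
    --interfaces=0=ON_PREM_IP

# One tunnel from each HA VPN gateway interface to the peer gateway.
for IFACE in 0 1; do
  gcloud beta compute vpn-tunnels create "tunnel-${IFACE}" \
      --region=REGION --vpn-gateway=ha-vpn-gw --interface="${IFACE}" \
      --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
      --ike-version=2 --shared-secret=SECRET --router=cloud-router
done
```

Note that both tunnels attach to a Cloud Router, since tunnels on HA VPN gateways must use dynamic (BGP) routing.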

[Image: Technical architecture]

Deployment options
You can use HA VPN in two modes: Active/Active, in which both redundant tunnels carry traffic under normal operation, or Active/Passive, in which one tunnel actively carries traffic while the other one acts as a backup. Tunnels connected to the new gateways must use dynamic (BGP) routing, and you can configure these modes by changing route priorities (MED). To ensure there is no traffic loss in case of failure, we recommend that you deploy HA VPN in an Active/Passive setup, such that the passive tunnel can take over all traffic from the active tunnel during a failure. If you select an Active/Active configuration, you must make sure that the combined traffic for both tunnels is within the capacity of one tunnel to provide a consistent bandwidth experience during a failure.
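The Active/Active sizing rule above can be stated as a simple invariant: after one tunnel fails, the surviving tunnel must carry everything. A minimal sketch, where the 3 Gbps per-tunnel capacity is an assumption for illustration only:

```python
# Sizing check for an Active/Active HA VPN deployment: during a failure,
# the one remaining tunnel absorbs the combined traffic of both.

TUNNEL_CAPACITY_GBPS = 3.0  # assumed per-tunnel limit, for illustration

def survives_tunnel_failure(tunnel_loads_gbps: list) -> bool:
    """True if a single tunnel can carry the combined steady-state load."""
    return sum(tunnel_loads_gbps) <= TUNNEL_CAPACITY_GBPS

print(survives_tunnel_failure([1.2, 1.3]))  # 2.5 Gbps fits in one tunnel
print(survives_tunnel_failure([2.0, 2.0]))  # 4.0 Gbps would overload it
```

In an Active/Passive deployment this check is trivially satisfied, since the passive tunnel carries no traffic under normal operation, which is why it is the recommended mode for a consistent bandwidth experience.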

Compare HA VPN with the traditional approach to achieving high availability with classic VPN: manually creating a redundant VPN. In addition to providing a higher SLA, HA VPN makes it easy to set up redundant tunnels that seamlessly and automatically fail traffic over to the second tunnel in the event of a failure.

[Image: Google on-premises network]

Move data faster with 100 Gbps Interconnect
Another announcement we made at Next ’19 was 100 Gbps Dedicated Interconnect, which enables and accelerates bandwidth-heavy applications with 10X the circuit bandwidth for your hybrid and multi-cloud deployments.

Using Google Cloud Storage for archiving or disaster recovery, or performing massive data processing with BigQuery, can require a lot of bandwidth. The following table illustrates the speed gains from upgrading from 10 Gbps to 100 Gbps. Depending on the size of the dataset, applications that were impractical at 10 Gbps (a daily backup of 100 TB of data, estimated at 30 hours) become practical at 100 Gbps (about three hours).
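The transfer times in the table are straightforward to estimate from link speed. The sketch below computes ideal times; real transfers add protocol and application overhead, so observed times (such as the ~30 hours cited for 100 TB at 10 Gbps) run higher than the ideal, which an efficiency factor can approximate.

```python
# Ideal bulk-transfer time for a dataset over a dedicated link.

def transfer_hours(data_terabytes: float, link_gbps: float,
                   efficiency: float = 1.0) -> float:
    """Hours to move the dataset at the given link speed (decimal units)."""
    bits = data_terabytes * 1e12 * 8              # TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

print(f"100 TB @  10 Gbps: {transfer_hours(100, 10):.1f} h ideal")
print(f"100 TB @ 100 Gbps: {transfer_hours(100, 100):.1f} h ideal")
```

At line rate the ideal times are about 22 hours and 2.2 hours respectively; with realistic overhead these stretch toward the ~30-hour and ~3-hour figures in the table, but the 10x ratio between the two link speeds holds either way.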

[Image: Network bandwidth]

Enable secure connectivity at high speed
Cloud Interconnect also lets you directly connect your on-prem networks to GCP’s network without traversing the public internet, thus increasing speed and security.

[Image: Secure connectivity]

With Dedicated Interconnect, you connect your router directly to Google’s peering edge and reach GCP workloads over our large global private network. Dedicated Interconnect is available at either 10 Gbps or 100 Gbps, and you can configure multiple links as a Link Aggregation Group (LAG) for even higher bandwidth; for less bandwidth-intensive applications, lower-bandwidth connections are available through partners. This solution also lets you extend your on-premises or data center networks to GCP using RFC 1918 addresses, simplifying hybrid cloud deployments.

For more information about Dedicated Interconnect, including connectivity requirements, 99.99% vs. 99.9% availability and a list of colocation facilities where you can meet Google, please see the Dedicated Interconnect Overview page.

Let’s connect
HA VPN and 100 Gbps Dedicated Interconnect are just the latest examples of how we’re working to give you the right options to connect your business to GCP. Let us know how you plan to use these new networking services and what capabilities you’d like in the future. You can learn more about GCP’s cloud networking portfolio online and reach us at
