Welcome back to our blog post series on Service Mesh and Istio.
In our previous posts, we talked about what the Istio service mesh is, and why it matters. Then, we dove into demos on how to bring Istio into production, from safe application rollouts and security, to SRE monitoring best practices.
Today, leading up to Google Cloud NEXT ’19, we’re talking all about using Istio across environments, and how Istio can help you unlock the power of hybrid cloud.
Why hybrid?
Hybrid cloud can take on many forms. Typically, hybrid cloud refers to operating across public cloud and private (on-premises) cloud, and multi-cloud means operating across multiple public cloud platforms.
Adopting a hybrid- or multi-cloud architecture could provide a ton of benefits for your organization. For instance, using multiple cloud providers helps you avoid vendor lock-in, and allows you to choose the best cloud services for your goals. Using both cloud and on-premises environments allows you to simultaneously enjoy the benefits of the cloud (flexibility, scalability, reduced costs) and on-prem (security, lower latency, hardware re-use). And if you’re looking to move to the cloud for the first time, adopting a hybrid setup lets you do so at your own pace, in the way that works best for your business.
Based on our experience at Google, and what we hear from our customers, we believe that adopting a hybrid service mesh is key to simplifying application management, security, and reliability across cloud and on-prem environments, whether your applications run in containers or in virtual machines. Let’s talk about how to use Istio to bring that hybrid service mesh into reality.
Hybrid Istio: a mesh across environments
One key feature of Istio is that it provides a services abstraction for your workloads (Pods, Jobs, VM-based applications). When you move to a hybrid topology, this services abstraction becomes even more crucial, because now you have not just one, but many environments to worry about.
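To make that concrete, here’s a minimal sketch of how a VM-based application can be folded into that services abstraction with an Istio ServiceEntry, so callers address it just like any in-mesh service. The host name, port, and VM address below are hypothetical placeholders.

```bash
# Illustrative sketch: register a VM-hosted service in the mesh so it gets the
# same services abstraction as in-cluster Pods. Host name and address are
# hypothetical placeholders.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-payments
spec:
  hosts:
  - payments.internal.example.com   # logical name callers use
  location: MESH_INTERNAL           # treat the VM as part of the mesh
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.128.0.7             # the VM's private IP (example)
EOF
```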
When you adopt Istio, you get all the management benefits for your microservices on a single Kubernetes cluster: visibility, granular traffic policies, unified telemetry, and security. But when you adopt Istio across multiple environments, you are effectively giving your applications new superpowers, because Istio is not just a services abstraction on Kubernetes. Istio is also a way to standardize networking across your environments. It’s a way to centralize API management and decouple JWT validation from your code. It’s a fast track to a secure, zero-trust network across cloud providers.
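As a hedged example of that JWT decoupling, the sketch below uses the Istio 1.1-era authentication.istio.io Policy API (newer releases express the same idea with RequestAuthentication) to validate end-user tokens at the sidecar. The "frontend" service name and the issuer are placeholders, not part of any particular demo.

```bash
# Sketch: require end-user JWTs at the sidecar for a hypothetical "frontend"
# service, so token validation happens in Envoy rather than application code.
# Uses the Istio 1.1-era authentication API; newer versions use RequestAuthentication.
kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-jwt
  namespace: default
spec:
  targets:
  - name: frontend                  # hypothetical service to protect
  origins:
  - jwt:
      issuer: "https://accounts.google.com"
      jwksUri: "https://www.googleapis.com/oauth2/v3/certs"
  principalBinding: USE_ORIGIN
EOF
```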
So how does all this magic happen? Hybrid Istio refers to a set of sidecar Istio proxies (Envoys) that sit next to all your services across your environments, on every VM and in every container, and know how to talk to each other across boundaries. These Envoy sidecars might be managed by one central Istio control plane, or by multiple control planes running in each environment.
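To show how that cross-boundary traffic stays locked down, here’s a sketch of mesh-wide mutual TLS, again using the Istio 1.1-era APIs (MeshPolicy plus a catch-all DestinationRule); newer releases express the same idea with PeerAuthentication.

```bash
# Sketch (Istio 1.1-era APIs): enforce mutual TLS for all sidecar-to-sidecar
# traffic in the mesh. Newer Istio versions use PeerAuthentication instead.
kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"                   # all in-mesh services
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL            # clients present Istio-issued certs
EOF
```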
Let’s dive into some examples.
Multicluster Istio, one control plane
One way to enable hybrid Istio is by configuring a remote Kubernetes cluster that “calls home” to a centrally-running Istio control plane. This setup is useful when you have multiple GKE clusters in the same GCP project and Kubernetes pods in both clusters need to talk to each other. Use cases for this include: production and test clusters through which you canary new features, standby clusters ready to handle failover, or redundant clusters across zones or regions.
This demo spins up two GKE clusters in the same project but in two different zones (us-central and us-east). We install the Istio control plane on one cluster, and Istio’s remote components (including the sidecar proxy injector) on the other cluster. From there, we can deploy a sample application spanning both Kubernetes clusters.
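A rough outline of that flow is sketched below, assuming the Istio 1.1-era Helm charts; the cluster names, zones, and project are illustrative placeholders, and the exact values files and flags vary by Istio release.

```bash
# Rough sketch of the two-cluster demo flow (cluster names, zones, and project
# are placeholders; flags follow the Istio 1.1 multicluster install guide and
# may differ in other releases).

# 1. Create two GKE clusters in the same project, in different zones.
gcloud container clusters create cluster-central --zone us-central1-b
gcloud container clusters create cluster-east    --zone us-east1-b

# 2. Install the full Istio control plane on the first cluster.
kubectl config use-context "gke_${PROJECT}_us-central1-b_cluster-central"
kubectl create namespace istio-system
helm template install/kubernetes/helm/istio --name istio \
  --namespace istio-system | kubectl apply -f -

# 3. Record the control-plane pod IP the remote cluster needs to reach.
export PILOT_IP=$(kubectl -n istio-system get pod -l istio=pilot \
  -o jsonpath='{.items[0].status.podIP}')

# 4. Install only the remote components (sidecar injector, Citadel) on the
#    second cluster, pointing them back at the central control plane.
kubectl config use-context "gke_${PROJECT}_us-east1-b_cluster-east"
kubectl create namespace istio-system
helm template install/kubernetes/helm/istio --name istio-remote \
  --namespace istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.remotePilotAddress=${PILOT_IP} | kubectl apply -f -
```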