In April, we introduced Anthos, Google Cloud’s new hybrid and multi-cloud platform that lets you build and manage modern hybrid applications across environments. Powered by Kubernetes and other open-source technologies, Anthos is the only software-based hybrid platform available today that lets you run your applications, unmodified, on your existing on-premises hardware or in the public cloud. And with a technology stack that simplifies everything from migration to security to platform operations, Anthos makes it easy to accelerate application development and delivery.
Anthos’ arrival has generated a lot of inquiries from enterprises looking to move closer to the cloud. Here are five common questions about Anthos.
1. How do I get started with Anthos?
One of the biggest decisions companies make is whether to migrate or modernize their existing workloads. Because both processes are complicated, time-consuming, and labor-intensive, it may seem necessary to choose one or the other. Anthos removes that trade-off: you can migrate workloads as they are today, then modernize them incrementally.
If you are deploying onto Google Cloud Platform (GCP), you can get started with Anthos simply by creating a new GKE cluster with Istio enabled in your project. If deploying on-premises, download and install GKE On-Prem, and register it with your GCP account. Once registered, you can manage your GKE On-Prem clusters just like any existing GKE cluster, as well as incorporate your services as a part of an Istio service mesh to get observability and enforce security.
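On GCP, that first step can be done from the command line. A minimal sketch, assuming a placeholder project, zone, and cluster name; the Istio add-on flags shown here were in beta at the time of writing and may differ by gcloud release:

```shell
# Sketch: create a GKE cluster with the Istio add-on enabled.
# Project, zone, and cluster name are placeholders.
gcloud beta container clusters create my-anthos-cluster \
    --project my-project \
    --zone us-central1-a \
    --addons=Istio \
    --istio-config=auth=MTLS_PERMISSIVE
```

From there, `kubectl` works against the new cluster as it would against any GKE or GKE On-Prem cluster.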
In addition, with the upcoming beta of Migrate for Anthos, you can take VMs from on-prem environments, Compute Engine, or other clouds and automatically convert them into containers running in Google Kubernetes Engine (GKE). Migrate for Anthos handles stateful workloads, automatically transforming them to run as containers in GKE pods. Once you’ve migrated your applications into containers, you can modernize them further with services such as the Istio service mesh, Stackdriver Logging and Monitoring, and other solutions for Kubernetes applications in GCP Marketplace.
2. How does Anthos help secure my environment?
Anthos seamlessly integrates security into each stage of the application lifecycle: development, build, and run. Security best practices are implemented as default settings and configurations; disabling the Kubernetes dashboard is one example. We validate and test conformant Kubernetes versions, and provide patch management and incident response. Your application services are protected in a zero-trust environment, with service-to-service communications authenticated and encrypted using mutual TLS (mTLS).
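The service-to-service mTLS described above is enforced through Istio policy. As a sketch (resource kind and API version vary by Istio release; recent releases express this as a mesh-wide PeerAuthentication in the root namespace):

```yaml
# Sketch: require mutual TLS for all workloads in the mesh.
# Applies mesh-wide because it lives in the istio-system
# root namespace and is named "default".
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

With `STRICT` mode, the sidecar proxies reject any plaintext traffic between services, so encryption and workload identity are enforced without application changes.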
Anthos also delivers a single, centralized point for enforcing policy across the fleet, whether that’s on-prem or in the cloud. As a security admin, you can let developers develop, knowing your policies are enforced. You control access based on policies and roles, not machines. Centralized configuration management through Anthos Config Management continuously checks cluster state for divergence from policies such as RBAC rules and resource quotas. Finally, with a shared responsibility model between you and Google Cloud, Anthos helps reduce the burden of managing patches and performing incident response.
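Those fleet-wide policies are just Kubernetes resources checked into the config repo. A minimal sketch, assuming an illustrative repo layout with one directory per namespace:

```yaml
# Sketch: a resource quota stored in the config repo, e.g. at
# namespaces/team-a/quota.yaml (the path is an illustration).
# Anthos Config Management applies it to every enrolled cluster
# and reverts any out-of-band changes it detects.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
spec:
  hard:
    pods: "50"
    requests.cpu: "20"
    requests.memory: 64Gi
```

Because the quota lives in Git, a change to it is reviewed like any other code change, and every cluster in the fleet converges on the same limits.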
3. How does Anthos work across multiple environments?
Spanning multiple environments can add complexity in terms of resource management and consistency. Anthos provides a unified model for computing, networking, and even service management across clouds and data centers.
Configuration as code is one approach to managing this complexity. Anthos provides configuration as code via Anthos Config Management, which deploys the Anthos Config Management Operator to your GKE or GKE On-Prem clusters, allowing you to monitor and apply any configuration changes detected in a Git repo.
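Pointing a cluster at the repo is itself done with configuration. A minimal sketch of the ConfigManagement resource, with a placeholder repo URL and cluster name; field names reflect the operator at the time of writing and may evolve:

```yaml
# Sketch: tell the Anthos Config Management Operator which Git
# repo to sync. Repo URL, branch, and cluster name are placeholders.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: my-cluster
  git:
    syncRepo: git@github.com:example/anthos-config.git
    syncBranch: master
    policyDir: "."
    secretType: ssh
```

Once this is applied, the operator watches the branch and reconciles each cluster against whatever is committed there.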
This real-time configuration management approach also provides central governance, reconciling desired state with the actual resources running across your on-prem and multi-cloud environment. Further, because it’s built on a consistent set of APIs based on open-source technologies (like Kubernetes, Istio, and Knative), developers and operators only have to learn one stack that applies to multiple cloud providers.
Anthos increases observability across your hybrid environment with richer metrics and telemetry, and enables operations such as zero-downtime upgrades, canary releases for services, and Kubernetes cluster version upgrades.
4. Can Anthos speed up application modernization?
To remain competitive, enterprises need to move faster than ever to build new applications, generate business-differentiating value, and continue to innovate. But developers are busy and in short supply. With Anthos, you can take existing applications and deploy them anywhere, without changing a single line of code. From there, Anthos lets you define custom workflows for building, testing, and deploying across those multiple environments.
Anthos also speeds up application modernization with Cloud Run on GKE (currently in beta), which automatically brings serverless capabilities such as rapid scale-up, scale-down to zero, and eventing to your applications. Built on Knative, Cloud Run runs stateless HTTP containers in a fully managed environment or in your own GKE cluster.
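Under the hood, Cloud Run on GKE deploys standard Knative serving resources. A minimal sketch, with a placeholder image and service name; the API version varies by Knative release (`v1alpha1` during the beta described here, `v1` in later releases):

```yaml
# Sketch: a Knative Service as deployed by Cloud Run on GKE.
# Name and container image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest
          env:
            - name: TARGET
              value: "Anthos"
```

Knative handles routing, revisioning, and autoscaling for this service, including scaling it to zero when no requests are arriving.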
5. What other vendors support Anthos?
We’ve developed a global partner network for Anthos to help you innovate faster, scale smarter, and stay secure. We’re expanding technology integrations with key partners, increasing the number of service partners, and doubling down on open source to make building on Google Cloud even more flexible and open. We’re working closely with more than 35 hardware, software, and system integration partners to ensure you see value from Anthos from the start. Cisco, VMware, Dell EMC, HPE, Intel, and Lenovo have committed to delivering Anthos on their own hyperconverged infrastructure, and we’re also working with more than 20 enterprise software providers to integrate their offerings with Anthos’ unique capabilities.