November 27, 2019

Congratulations! You’ve decided to go with Google Kubernetes Engine (GKE) as your managed container orchestration platform. Your first order of business is to familiarize yourself with Kubernetes architecture, functionality and security principles. Then, as you get ready to install and configure your Kubernetes environment (on so-called day one), here are some security questions to ask yourself, to help guide your thinking.
- How will you structure your Kubernetes environment?
- What is your identity provider service and source of truth for users and permissions?
- How will you manage and restrict changes to your environment and deployments?
- Are there GKE features that you want to use that can only be enabled at cluster-creation time?
Ask these questions before you begin designing your production cluster, and take them seriously, as it’ll be difficult to change your answers after the fact.
Structuring your environment
As soon as you decide on Kubernetes, you face a big decision: how should you structure your Kubernetes environment? By environment, we mean your workloads and their corresponding clusters and namespaces, and by structure we mean what workload goes in what cluster, and how namespaces map to teams. The answer, not surprisingly, depends on who’s managing that environment.
If you have an infrastructure team to manage Kubernetes (lucky you!), you’ll want to limit the number of clusters to make it easier to manage configurations, updates and consistency. A reasonable approach is to have separate clusters for production, test, and development.
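As a sketch of this approach, separate clusters can be created per environment with `gcloud` (project, cluster, and region names here are hypothetical placeholders; adjust them for your organization):

```shell
# Hypothetical names: one cluster per environment, ideally in separate projects.
gcloud container clusters create prod-cluster \
  --project my-prod-project \
  --region us-central1 \
  --enable-shielded-nodes

gcloud container clusters create test-cluster \
  --project my-test-project \
  --region us-central1

gcloud container clusters create dev-cluster \
  --project my-dev-project \
  --region us-central1
```

Keeping each environment in its own project also gives you a clean IAM boundary between production and development.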
Separate clusters also make sense for sensitive or regulated workloads that have substantially different levels of trust. For example, you may want to use controls in production that would be disruptive in a development environment. If a given control doesn’t apply broadly to all your workloads, or would slow down some development teams, segment those workloads into separate clusters and give each dev team or service its own namespace within a cluster.
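The per-team namespace pattern is just a set of small manifests. A minimal sketch, with hypothetical team names, might look like this:

```yaml
# One namespace per dev team in a shared cluster (team names are examples).
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    team: payments
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-search
  labels:
    team: search
```

Labeling namespaces by team makes it easier later to attach policies (quotas, network policies, RBAC) to each team consistently.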
If there’s no central infrastructure team managing Kubernetes (if it’s more “every team for itself”), then each team will typically run its own cluster. That means more work for each team, and more responsibility for enforcing minimum standards, but also much more control over which security measures they implement, including upgrades.
Setting up permissions
Most organizations use an existing identity provider, such as Google Identity or Microsoft Active Directory, consistently across the environment, including for workloads running in GKE. This allows you to manage users and permissions in a single place, avoiding potential mistakes like accidentally over-granting permissions, or forgetting to update permissions as users’ roles and responsibilities change.
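For example, when using Google Identity (including Google Groups) as the provider, project-level access for a team can be granted in one place with an IAM binding. The project name and group address below are hypothetical:

```shell
# Grant a team's group read access to GKE resources in a project.
# Kubernetes RBAC inside the cluster still applies on top of this.
gcloud projects add-iam-policy-binding my-gke-project \
  --member="group:platform-team@example.com" \
  --role="roles/container.viewer"
```

Granting roles to groups rather than individual users means membership changes in the identity provider automatically propagate to your GKE permissions.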
What permissions should each user or group have in your Kubernetes environment? How you set up your permission model is strongly tied to how you segmented your workloads. If multiple teams share a cluster, you’ll need to use Role-Based Access Control (RBAC) to give each team permissions in their own namespaces (some services automate this, providing a self-service way for a team to create and get permissions for its namespace). Thankfully, RBAC is built into Kubernetes, which makes it easier to ensure consistency across multiple clusters, including different providers. Here is an overview of access control in Google Cloud.
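Putting the pieces above together, a sketch of namespace-scoped RBAC for a shared cluster might look like the following, where the namespace, role name, and group address are hypothetical placeholders:

```yaml
# Role: edit rights on common resources within one team's namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-payments
  name: team-edit
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# RoleBinding: attach the Role to a group from your identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-payments
  name: team-edit-binding
subjects:
- kind: Group
  name: payments-team@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-edit
  apiGroup: rbac.authorization.k8s.io
```

Because this is plain Kubernetes RBAC, the same manifests can be applied to every cluster a team uses, which helps keep permissions consistent across environments and even across providers.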