Multiple environments (staging, QA, production, etc.) with Kubernetes

Multiple Clusters Considerations

Take a look at this blog post from Vadim Eisenberg (IBM / Istio): Checklist: pros and cons of using multiple Kubernetes clusters, and how to distribute workloads between them.

I’d like to highlight some of the pros/cons:

Reasons to have multiple clusters

  • Separation of production/development/test: especially for testing a new version of Kubernetes, a service mesh, or other cluster software
  • Compliance: some regulations require certain applications to run in separate clusters or on separate VPNs
  • Better isolation for security
  • Cloud/on-prem: to split the load between cloud and on-premises services

Reasons to have a single cluster

  • Reduce setup, maintenance and administration overhead
  • Improve utilization
  • Cost reduction

For an environment that keeps cost and maintenance effort moderate while still ensuring security isolation for production applications, I would recommend:

  • 1 cluster for DEV and STAGING (separated by namespaces, perhaps even further isolated with Network Policies, as provided by Calico)
  • 1 cluster for PROD
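As a sketch of the namespace-level isolation mentioned above (the namespace names are illustrative, and this assumes a cluster whose CNI enforces Network Policies, e.g. Calico), a policy like the following keeps traffic in the DEV namespace limited to pods in that same namespace:

```shell
# Sketch only: assumes a NetworkPolicy-capable CNI and illustrative
# namespace names "dev" and "staging".

kubectl create namespace dev
kubectl create namespace staging

# In the "dev" namespace, deny all ingress except traffic coming from
# pods inside the same namespace.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: dev
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods from this same namespace
EOF
```

An empty `podSelector` in the `from` clause (with no `namespaceSelector`) matches pods in the policy's own namespace, which is what gives the cross-namespace isolation here.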

Environment Parity

It’s a good practice to keep development, staging, and production as similar as possible, as The Twelve-Factor App recommends (dev/prod parity):

Differences between backing services mean that tiny incompatibilities
crop up, causing code that worked and passed tests in development or
staging to fail in production. These types of errors create friction
that disincentivizes continuous deployment.

Combine a powerful CI/CD tool with Helm. You can use the flexibility of Helm values to set default configurations, overriding only the settings that differ from one environment to another.
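A minimal sketch of that pattern (the chart path, release name, and values file names are assumptions): a base values.yaml holds the defaults, and one small file per environment overrides only what differs. With multiple `-f` flags, the last file wins.

```shell
# Sketch: defaults first, environment overrides last (last -f file wins).
# Chart path, release name, and file names here are illustrative.

# Deploy to staging with staging-specific overrides.
helm upgrade --install myapp ./mychart \
  -f values.yaml \
  -f values-staging.yaml \
  --namespace staging

# Deploy to production with its own overrides.
helm upgrade --install myapp ./mychart \
  -f values.yaml \
  -f values-prod.yaml \
  --namespace prod
```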

GitLab CI/CD with Auto DevOps integrates tightly with Kubernetes, letting you manage multiple Kubernetes clusters with built-in Helm support.

Managing multiple clusters (kubectl interactions)

When you are working with multiple Kubernetes clusters, it’s easy to
mix up contexts and run kubectl against the wrong cluster. Beyond
that, Kubernetes enforces a version-skew policy between the client
(kubectl) and the server (the Kubernetes API server), so running
commands in the right context does not guarantee running the right
client version.

To overcome this:

  • Use asdf to manage multiple kubectl versions
  • Set the KUBECONFIG env var to change between multiple kubeconfig files
  • Use kube-ps1 to keep track of your current context/namespace
  • Use kubectx and kubens to change fast between clusters/namespaces
  • Use aliases to combine them all together
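Put together, a shell setup along these lines (the kubeconfig file names and alias names are illustrative, and kubectx, kubens, and kube-ps1 must be installed separately) might look like:

```shell
# Illustrative kubeconfig file names; adjust to your own layout.
# Listing several files in KUBECONFIG makes all their contexts visible at once.
export KUBECONFIG="$HOME/.kube/config-dev:$HOME/.kube/config-prod"

# Short aliases combining the tools above.
alias k='kubectl'
alias kcx='kubectx'   # switch between clusters
alias kns='kubens'    # switch between namespaces

# kube-ps1 shows the current context/namespace in the prompt (bash example;
# the script path below is install-specific, so it is left commented out).
# source /path/to/kube-ps1.sh
# PS1='$(kube_ps1) '$PS1
```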

I have an article that exemplifies how to accomplish this: Using different kubectl versions with multiple Kubernetes clusters

I also recommend the following reads:

  • Mastering the KUBECONFIG file by Ahmet Alp Balkan (Google Engineer)
  • How Zalando Manages 140+ Kubernetes Clusters by Henning Jacobs (Zalando Tech)
