Istio sidecar injection

There are several ways to inject the Istio sidecar configuration into Pods: automatic injection, updating the deployment YAML/JSON, using Helm or Kustomize, and updating an existing live deployment. We will look into each of them.

Automatic Sidecar injection

Istio uses ValidatingAdmissionWebhooks for validating Istio configuration and MutatingAdmissionWebhooks for automatically injecting the sidecar proxy into user pods.

For automatic sidecar injection to work, the admissionregistration.k8s.io API must be enabled:

$ kubectl api-versions | grep admissionregistration

Step two is to verify that the MutatingAdmissionWebhook and ValidatingAdmissionWebhook plugins are listed in the kube-apiserver --enable-admission-plugins flag. That can be done by cluster administrators.

When the injection webhook is enabled, any new pods that are created will automatically have a sidecar added to them.

To enable sidecar injection for a namespace, label the namespace with istio-injection=enabled:

$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection
NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    1h        enabled
istio-system   Active    1h
kube-public    Active    1h
kube-system    Active    1h
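To confirm injection is working, you can start a throwaway Pod in the labeled namespace and check that the istio-proxy container was added (the nginx image and Pod name here are just illustrative):

```shell
$ kubectl run test-nginx --image=nginx -n default
$ kubectl get pod test-nginx -n default -o jsonpath='{.spec.containers[*].name}'
```

The second command should list both the application container and istio-proxy.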

Sidecar injection with istioctl on YAML file

To manually inject the sidecar into a deployment, use istioctl kube-inject:

$ istioctl kube-inject -f deployment.yaml | kubectl apply -f -
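If you want to review the injected manifest before applying it, istioctl kube-inject can also write its output to a file:

```shell
$ istioctl kube-inject -f deployment.yaml -o deployment-injected.yaml
$ kubectl apply -f deployment-injected.yaml
```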

Sidecar injection into existing deployment

$ kubectl get deployment -o yaml | istioctl kube-inject -f - | kubectl apply -f -

Sidecar injection with istioctl and helm

Sidecar injection into a Helm release can be done in two steps, using helm install and helm template. The downside is that some features, such as rollback of a Helm release, won't work; only rolling forward is possible.

First, install the chart using helm install:

$ helm install nginx stable/nginx

Second, update the deployment with the sidecar using helm template:

$ helm template stable/nginx | istioctl kube-inject -f - | kubectl apply -f -

Sidecar injection with kustomize

The deployment file is listed as a resource in kustomization.yaml:

resources:
  - deployments.yaml

To inject the Istio sidecar into the deployment, a Kustomize patch should be used:

patches:
  - path: sidecar.yaml
    target:
      kind: Deployment

Where sidecar.yaml is the Istio sidecar deployment patch.
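Alternatively, the rendered Kustomize output can be combined with manual injection, following the same render-and-wrap idea as the Helm approach:

```shell
$ kustomize build . | istioctl kube-inject -f - | kubectl apply -f -
```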


There are many ways to inject the Istio sidecar, or any sidecar, into a deployment. The main idea is to render the deployment file and wrap it with istioctl for manual injection, or to set up automatic injection with an admission webhook.

How to organize Namespaces in Kubernetes

There are two main objectives:

  1. Users are able to do their job with the highest velocity possible
  2. Users are organized into groups in a multi-tenant setup

Multi tenancy

Kubernetes namespaces help to set up boundaries between groups of users and applications in a cluster.
To make working in a shared cluster more pleasant and secure for your users, Kubernetes has a number of policies and controls.

Access policies

RBAC's primary objective is to authorize users and applications to perform specific operations in a namespace or in the whole cluster. Use RBAC to give your users enough permissions in their namespace, so they can handle day-to-day operations on their own.
Network Policies control how Pods can communicate with each other. Use them to firewall traffic between namespaces, or inside a namespace, to critical components like databases.
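As a sketch, a NetworkPolicy like the following (namespace and label names are illustrative) allows traffic to a database Pod only from Pods in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-same-namespace
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: database      # the Pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # any Pod in the same namespace; cross-namespace traffic is denied
```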

Resource controls

By default, a Pod can utilize as many compute resources as are available.
Resource Quotas control the total amount of compute and storage resources that can be consumed in a namespace.
A Limit Range helps prevent one Pod from utilizing all the resources in a namespace. A LimitRange sets minimum and maximum boundaries for compute and storage resources per Pod.
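A minimal sketch of both controls for a namespace (the namespace name and all values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"            # total CPU requested across the namespace
    requests.memory: 20Gi
    persistentvolumeclaims: "5"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: team-a
spec:
  limits:
  - type: Container
    min:
      cpu: 50m
      memory: 64Mi
    max:
      cpu: "2"                    # no single container may exceed these
      memory: 2Gi
```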

Application Security

Pod Security Policies control security-sensitive aspects of containers; examples are privileged containers, use of host namespaces, and many others.
Open Policy Agent is a very powerful policy framework which helps to create custom policies for applications and users in a cluster. For example:

  • force users to use a specific label in Kubernetes objects like Service or Deployment
  • deny pulling images with the :latest tag
  • allow pulling images only from a specific Docker registry
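As an illustration, the :latest rule can be sketched in OPA's Rego language as an admission policy; the package name and input paths assume a common OPA admission-controller setup, so adapt them to your own deployment:

```rego
package kubernetes.admission

# Deny any Pod that references an image with the :latest tag.
deny[msg] {
  input.request.kind.kind == "Pod"
  image := input.request.object.spec.containers[_].image
  endswith(image, ":latest")
  msg := sprintf("container image %v uses the :latest tag", [image])
}
```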


The following examples could help you decide on namespace boundaries and naming:

  • Namespace per team
  • Namespace per team and project
  • Namespace per application
  • Namespace per git branch name

A namespace should provide enough self-managing autonomy for its users and stay in sync with application requirements.
The bigger the namespace, the harder it is to tune its boundaries; at the same time, many small namespaces can create additional operational work for cluster administrators.

A namespace per team and project is an optimal start which should work for most organizations.

Let me know your experience in comments and have a great day!

120 Days of AWS EKS in Staging

Felix Georgii wakeboarding at Wake Crane Project in Pula, Croatia on September 25, 2016

My journey with Kubernetes started with Google Kubernetes Engine, then one year later with self-managed Kubernetes, and then with a migration to Amazon EKS.

EKS as a managed Kubernetes cluster is not 100% managed. Some core tools didn't work as expected, and customer expectations were not aligned with the functionality provided. Here I have summarized the experience we gained by running an EKS cluster in Staging.

To run EKS you still have to:

  • Prepare network layer: VPC, subnets, firewalls…
  • Install worker nodes
  • Periodically apply security patches on worker nodes
  • Monitor worker node health by installing node-problem-detector and a monitoring stack
  • Setup security groups and NACLs
  • and more

EKS Staging how to?

EKS Setup

  • Use terraform EKS module or eksctl to make installation and maintenance easier.

EKS Essentials

  • Install node problem detector to monitor for unforeseen kernel or docker issues
  • Scale up kube-dns to two or more instances
  • See more EKS core tips in 90 Days EKS in Production
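Scaling DNS up is a one-liner; on EKS the deployment is typically named coredns in recent versions, while older clusters may use kube-dns, so check which one your cluster runs:

```shell
$ kubectl get deployment -n kube-system
$ kubectl scale deployment coredns -n kube-system --replicas=2
```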

EKS Autoscaling

  • Kubernetes Cluster Autoscaler is without doubt a must-have addition to the EKS toolkit. Scale your cluster up, and down to 0 instances if you wish. Base your scaling on the cluster's state of Pending/Running Pods to get the most out of it.
  • Kubernetes custom metrics, node-exporter and kube-state-metrics are must-haves to enable horizontal Pod autoscaling based on built-in metrics like CPU/memory, as well as on application-specific metrics like request rate or data throughput.
  • Prometheus and cAdvisor are another addition you will need to enable metrics collection.
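With the metrics pipeline in place, a basic CPU-based HorizontalPodAutoscaler looks roughly like this (the target Deployment name and thresholds are illustrative, and the API version depends on your cluster):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```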

Ingress controller

  • Istio is one of the most advanced, but breaking changes and its beta status might introduce hard-to-debug bugs
  • Contour looks like a good replacement for Istio. It doesn't have community support as strong as Istio's, but it is stable enough and has a quite cool IngressRoute CRD which makes Ingress fun to use
  • Nginx ingress is battle tested and has the best community support. It has a huge number of features, so it is a good choice for setting up the most stable environment

Stateful applications

  • Ensure you have enough nodes in each AZ where data volumes are. A good start is to create a dedicated node group for each AZ with the minimum number of nodes needed.
  • Ensure the persistent volume claim (PVC) is created in the desired AZ. Create a dedicated storage class for the specific AZ you need the PVC to be in. See allowedTopologies in the following example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-eu-west1-a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - eu-west1-a
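A PVC then pins itself to the AZ simply by referencing that storage class (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-eu-west1-a
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-eu-west1-a   # the AZ-specific class defined above
  resources:
    requests:
      storage: 20Gi
```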


EKS is a good managed Kubernetes service. Some of the mentioned tasks are common to all Kubernetes platforms, but there is a lot of room for the service to grow. The maintenance burden is still quite high, but fortunately the Kubernetes ecosystem has a lot of open-source tools to ease it.

Have fun!