Disaster recovery of single node Kubernetes control plane

Overview

There are many possible root causes why the control plane might become unavailable. Let's review the most common scenarios and mitigation steps.

The mitigation steps in this article are built around AWS public cloud features, but all popular public cloud offerings have similar functionality.

Apiserver VM shutdown or apiserver crashing

Results

  • unable to stop, update, or start new pods, services, or replication controllers
  • existing pods and services should continue to work normally, unless they depend on the Kubernetes API
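
A quick first check in this scenario is whether the apiserver still answers health probes. This is a minimal sketch; the address and port 6443 are assumptions that depend on your setup, and depending on cluster configuration the endpoint may require authentication:

# ask the apiserver for its health status via kubectl
kubectl get --raw '/healthz'

# or probe the endpoint directly from the control plane node
curl -k https://localhost:6443/healthz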
Continue reading Disaster recovery of single node Kubernetes control plane

Thoughts on a highly available Kubernetes cluster with a single control plane node

Why single node control plane?

Benefits are:

  • Monitoring and alerting are simple and on point. This reduces the number of false positive alerts.
  • Setup and maintenance are quick and straightforward. A less complex install process leads to a more robust setup.
  • Disaster recovery and its documentation are clearer and shorter.
  • Applications will continue to work even if the Kubernetes control plane is down.
  • Multiple worker nodes and multiple deployment replicas provide the necessary high availability for your applications (see the sketch after this list).
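
As a minimal sketch of that last point (the names, labels, and image below are illustrative, not taken from the posts), a deployment can run several replicas and ask the scheduler to spread them across worker nodes, so the application keeps serving traffic even while the control plane is down:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3                      # multiple replicas survive a single worker failure
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: webapp
              topologyKey: kubernetes.io/hostname   # prefer placing replicas on different nodes
      containers:
      - name: webapp
        image: nginx:1.17          # placeholder image
        ports:
        - containerPort: 80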

Disadvantages are:

  • Downtime of the control plane node makes it impossible to change any Kubernetes object, for example to schedule new deployments, update application configuration, or add/remove worker nodes.
  • If a worker node goes down during control plane downtime, it will not be able to re-join the cluster until the control plane is recovered.

Conclusions:

  • If you have a heavy load on the Kubernetes API, like frequent deployments from many teams, then you might consider a multi control plane setup.
  • If changes to Kubernetes objects are infrequent and your team can tolerate a bit of downtime, then a single control plane Kubernetes cluster can be a great choice.

How to enable minikube kvm2 driver on Ubuntu 18.04

Verify kvm2 support

Confirm virtualization support by CPU

egrep -c '(svm|vmx)' /proc/cpuinfo

An output of 1 or more indicates that the CPU can use virtualization technology.

sudo kvm-ok

The output "KVM acceleration can be used" indicates that the system has virtualization enabled and KVM can be used.
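
With virtualization confirmed, a minimal sketch of the remaining steps follows. The package names are for Ubuntu 18.04; depending on your minikube version the driver flag may be --vm-driver or --driver, and you may also need the separate docker-machine-driver-kvm2 binary:

# install KVM and libvirt
sudo apt-get update
sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

# allow your user to manage VMs without sudo (log out and back in afterwards)
sudo adduser $USER libvirt

# start minikube with the kvm2 driver
minikube start --vm-driver kvm2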

Continue reading How to enable minikube kvm2 driver on Ubuntu 18.04

How to organize Namespaces in Kubernetes

There are two main objectives:

  1. Users are able to do their job with the highest velocity possible
  2. Users are organized by groups in a multi-tenant setup

Multi tenancy

Kubernetes namespaces help to set up boundaries between groups of users and applications in a cluster.
To make it more pleasant and secure for your users to work in a shared cluster, Kubernetes has a number of policies and controls.

Access policies

RBAC's primary objective is to authorize users and applications to do specific operations in a namespace or in the whole cluster. Use RBAC to give your users enough permissions in their namespace, so they can do day-to-day operations on their own.
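As a minimal sketch (the namespace "team-a" and group "developers" are illustrative), a Role and RoleBinding that let a group manage common workload objects inside their own namespace could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-a
rules:
- apiGroups: ["", "apps"]                 # core API group and apps
  resources: ["pods", "services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: team-a
subjects:
- kind: Group
  name: developers                        # group name comes from your authentication setup
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io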
Network Policies control how pods can communicate with each other. Use them to firewall traffic between namespaces or inside a namespace to critical components like databases.
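For instance, a minimal sketch of a policy (namespace and labels are illustrative, and enforcement requires a network plugin that supports NetworkPolicy) that only lets pods labeled app=backend reach the database pods in the same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: database          # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend       # only backend pods may connect
    ports:
    - protocol: TCP
      port: 5432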

Continue reading How to organize Namespaces in Kubernetes

120 Days of AWS EKS in Staging

Felix Georgii wakeboarding at Wake Crane Project in Pula, Croatia on September 25, 2016

My journey with Kubernetes started with Google Kubernetes Engine, then one year later with self-managed Kubernetes, and then with a migration to Amazon EKS.

EKS as a managed Kubernetes offering is not 100% managed. Core tools didn't work as expected, and customer expectations were not aligned with the functionality provided. Here I have summarized the experience we gained by running an EKS cluster in Staging.

To run EKS you still have to:

  • Prepare network layer: VPC, subnets, firewalls…
  • Install worker nodes
  • Periodically apply security patches on workers nodes
  • Monitor worker node health by installing a node problem detector and a monitoring stack
  • Setup security groups and NACLs
  • and more
Continue reading 120 Days of AWS EKS in Staging

Kubernetes sidecar pattern: nginx ssl proxy for nodejs

I learned about the sidecar pattern from the Kubernetes documentation and later from the blog post by Brendan Burns, The distributed system toolkit. Sidecar is a very useful pattern and works nicely with Kubernetes.
In this tutorial I want to demonstrate how a "legacy" application can be extended with HTTPS support by using the sidecar pattern on Kubernetes.

Problem

We have a legacy application which doesn't have HTTPS support. We also don't want to send plain text traffic over the network. We don't want to make any changes to the legacy application, but the good thing is that it is containerized.

Solution

We will use the sidecar pattern to add HTTPS support to the "legacy" application.

Overview

Main application
For the example main application I will use a Node.js Hello World service (beh01der/web-service-dockerized-example).
Sidecar container
To add HTTPS support I will use an Nginx SSL proxy (ployst/nginx-ssl-proxy) container.

Deployment

TLS/SSL keys
First we need to generate TLS certificate keys and add them to Kubernetes secrets. For that I am using a script from the nginx ssl proxy repository which combines all the steps into one:
git clone https://github.com/ployst/docker-nginx-ssl-proxy.git
cd docker-nginx-ssl-proxy
./setup-certs.sh /path/to/certs/folder

Adding TLS files to Kubernetes secrets

cd /path/to/certs/folder
kubectl create secret generic ssl-key-secret --from-file=proxykey=proxykey --from-file=proxycert=proxycert --from-file=dhparam=dhparam
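
To double check that all three files made it into the secret, you can inspect it (this shows only the key names and sizes, not the contents):

kubectl describe secret ssl-key-secret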

Kubernetes sidecar deployment

In the following configuration I have defined the main application container "nodejs-hello" and the nginx container "nginx". Both containers run in the same pod and share the pod's resources, which is what implements the sidecar pattern. One thing you will want to modify is the hostname; I am using the non-existent hostname appname.example.com for this example.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nodejs-hello
  labels:
    app: nodejs
    proxy: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-hello
  template:
    metadata:
      labels:
        app: nodejs-hello
    spec:
      containers:
      - name: nodejs-hello
        image: beh01der/web-service-dockerized-example
        ports:
        - containerPort: 3000
      - name: nginx
        image: ployst/nginx-ssl-proxy
        env:
        - name: SERVER_NAME
          value: "appname.example.com"
        - name: ENABLE_SSL
          value: "true"
        - name: TARGET_SERVICE
          value: "localhost:3000"
        volumeMounts:
          - name: ssl-keys
            readOnly: true
            mountPath: "/etc/secrets"          
        ports:
        - containerPort: 80
        - containerPort: 443
      volumes:
      - name: ssl-keys
        secret:
          secretName: ssl-key-secret

Save this file as deployment.yaml and create the Deployment object:

kubectl create -f deployment.yaml

Wait for the pod to be Ready:

kubectl get pods

NAME                            READY     STATUS    RESTARTS   AGE
nodejs-hello-686bbff8d7-42mcn   2/2       Running   0          1m
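
Optionally, confirm that the nginx sidecar started correctly by checking its logs (the -c flag selects a container inside the pod):

kubectl logs <pod> -c nginx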

Testing

For testing I set up two port forwarding rules: the first for the application port and the second for the nginx HTTPS port:

kubectl port-forward <pod> 8043:443
#and in a new terminal window run
kubectl port-forward <pod> 8030:3000

First let's validate that the application responds to HTTP and doesn't respond to HTTPS requests:

#using http
curl -k -H "Host: appname.example.com" http://127.0.0.1:8030/ 
Hello World! 
I am undefined!

#now using https
curl -k -H "Host: appname.example.com" https://127.0.0.1:8030/ 
curl: (35) Server aborted the SSL handshake

Note: The SSL handshake failure is expected, as our "legacy" application doesn't support HTTPS, and even if it did, it would have to serve HTTPS connections on a different port than HTTP. The goal of this test was to demonstrate the response.

Time to test the connection through the sidecar nginx SSL proxy:

curl -k  -H "Host: appname.example.com" https://127.0.0.1:8043/
Hello World!
I am undefined!

Great! We got the expected output over the HTTPS connection.

Conclusions

  • Nginx extended the nodejs app with HTTPS support with zero changes to either container
  • The sidecar pattern's modular structure provides great re-use of containers, so teams can stay focused on application development
  • Ownership of the containers can be split between teams as there is no dependency between them
  • Scaling might not be very efficient, because the sidecar container has to scale together with the main container