Go HTTP middleware chain with the context package

Middleware is a function that wraps an http.Handler to do pre- or post-processing of the request.

A chain of middleware is a popular pattern for handling HTTP requests in Go. Using a chain we can:

  • Log application requests
  • Rate limit requests
  • Set HTTP security headers
  • and more

The Go context package helps to set up communication between middleware handlers.

Implementation

Middleware chain

To compose handlers into a chain I chose the Alice package, which implements the logic of looping through middleware functions.

chain := alice.New(security.SecureHTTP, httpRateLimiter.RateLimit, myLog.Log).Then(myAppHandler)    

http.ListenAndServe(":8080", chain) 

All source code is available at GitLab.

The only requirement Alice places on handlers is that they follow the constructor signature:

func(http.Handler) http.Handler

Example of the SecureHTTP handler:

func (security *SecurityHTTPOptions) SecureHTTP(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for header, value := range security.headers {
			w.Header().Add(header, value)
		}

		h.ServeHTTP(w, r)
	})
}

Context – communicator between middleware calls

Using the Go context package we can pass objects between middleware functions with:

  • context.WithValue
  • http.Request.WithContext
  • http.Request.Context().Value()

For example, the Log middleware stores the logger in the request context and measures how long the rest of the chain takes:

func (this *Logger) Log(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Store the logger in the request context for downstream handlers.
		ctx := context.WithValue(r.Context(), LoggerContextKey, this)
		t1 := time.Now()
		h.ServeHTTP(w, r.WithContext(ctx))
		t2 := time.Now()
		this.Printf("%s %q %v\n", r.Method, r.URL.String(), t2.Sub(t1))
	})
}

Where LoggerContextKey is the key we can later use to extract the logger object, for example in a Recover middleware:

func Recover(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if err := recover(); err != nil {
				mylog := r.Context().Value(logger.LoggerContextKey).(*logger.Logger)
				mylog.Printf("panic: %+v", err)
				http.Error(w, "500 - Internal Server Error", http.StatusInternalServerError)
			}
		}()

		h.ServeHTTP(w, r)
	})
}
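
Note that for the type assertion above to succeed, Recover has to sit after Log in the chain, so that it receives the request which already carries the logger in its context. A possible ordering (the handlers package name for Recover is an assumption):

chain := alice.New(security.SecureHTTP, httpRateLimiter.RateLimit, myLog.Log, handlers.Recover).Then(myAppHandler)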

Constructors

Sometimes you need to initialize objects before they are ready to use. To work around the Alice limitation on the function signature, we can create a wrapper type for a particular package. Let's look at the log package as an example:

Wrapper type

type Logger struct {
	*log.Logger
}

Constructor

func InitLogger() *Logger {
	return &Logger{log.New(os.Stdout, "logger: ", log.Lmicroseconds)}
}

In the main program we use it like this:

myLog := logger.InitLogger()
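
For completeness, here is a sketch of how the constructors and the chain could come together in main. The security and ratelimit packages, their constructors and the import paths are hypothetical placeholders; only the logger part mirrors the code above:

package main

import (
	"net/http"

	"github.com/justinas/alice"

	// Hypothetical import paths; replace them with your own packages.
	"example.com/app/logger"
	"example.com/app/ratelimit"
	"example.com/app/security"
)

func main() {
	myLog := logger.InitLogger()
	sec := security.InitSecurityHTTPOptions()      // hypothetical constructor
	httpRateLimiter := ratelimit.InitRateLimiter() // hypothetical constructor

	myAppHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	chain := alice.New(sec.SecureHTTP, httpRateLimiter.RateLimit, myLog.Log).Then(myAppHandler)

	http.ListenAndServe(":8080", chain)
}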

Conclusions

  • middleware functions should usually be small and fast, so they don’t impact overall application performance
  • a chain is just a simple loop; you can write your own to support a customized order of calls (a sketch follows after this list)
  • create a new custom type when you need to adapt other libraries to the middleware pattern
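
As noted in the second point, the chain itself is just a loop over constructors. Here is a minimal hand-rolled version, shown purely as a sketch of the idea rather than Alice's actual implementation:

package middleware

import "net/http"

// Constructor matches the func(http.Handler) http.Handler shape used above.
type Constructor func(http.Handler) http.Handler

// Chain wraps the final handler with the constructors in reverse order,
// so the first constructor in the list becomes the outermost middleware.
func Chain(h http.Handler, constructors ...Constructor) http.Handler {
	for i := len(constructors) - 1; i >= 0; i-- {
		h = constructors[i](h)
	}
	return h
}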

You can find the full demo of middleware chains at GitLab.

How to enable minikube kvm2 driver on Ubuntu 18.04

Verify kvm2 support

Confirm virtualization support by CPU

egrep -c '(svm|vmx)' /proc/cpuinfo

An output of 1 or more indicates that the CPU supports virtualization.

sudo kvm-ok

The output "KVM acceleration can be used" indicates that the system has virtualization enabled and KVM can be used.

Install kvm packages

 sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager 

Start libvirtd service

 sudo service libvirtd start 

Add your user to libvirt and kvm group

sudo adduser `id -un` libvirt
sudo adduser `id -un` kvm

and re-login as the user:

 sudo login -f `id -un` 

Verify installation

virsh list --all
# and
virt-host-validate

If the above show no FAIL statuses and no errors, the KVM part is done.
In case of "libvir: Remote error : Permission denied", verify which group libvirtd is using:

grep "unix_sock_group" /etc/libvirt/libvirtd.conf 

Check that /dev/kvm is in the 'kvm' group:

ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Dec 10 11:15 /dev/kvm 

Check that /var/run/libvirt/libvirt-sock is in the 'libvirt' group:

ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirt 0 Dec 10 10:45 /var/run/libvirt/libvirt-sock

If the groups are different, update the group:

sudo chown root:kvm /dev/kvm 

Now you need to either re-login or reload the kernel modules:

sudo rmmod kvm
sudo modprobe -a kvm

Docker machine kvm2 driver

docker-machine-driver-kvm2 driver

To support the kvm2 driver, minikube requires docker-machine-driver-kvm2 to be installed in the system $PATH. The latest minikube versions download that driver during the bootstrap process, so upgrading to the latest minikube takes care of this step.
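
If you are on an older minikube and need to install the driver yourself, a manual download along these lines should work (check the minikube release notes for the current URL before copying it):

curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
sudo install docker-machine-driver-kvm2 /usr/local/bin/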

Update minikube settings to use kvm2 driver

minikube config set vm-driver kvm2  

If you don’t want to set kvm2 as the default driver, you can pass the --vm-driver kvm2 option to the minikube start command.

Start minikube with kvm2

minikube start --memory=16384 --cpus=4 -p minikube-kvm2 --vm-driver kvm2

Expected output:

Downloading driver docker-machine-driver-kvm2
minikube v1.5.2 on Ubuntu 18.04
Creating kvm2 VM (CPUs=4, Memory=6688MB, Disk=20000MB) ...
Preparing Kubernetes v1.16.2 on Docker '18.09.9' ...


Istio sidecar injection

There are several ways to inject the Istio sidecar configuration into Pods: automatic injection, a YAML/JSON deployment update, using Helm or Kustomize, and updating an existing live deployment. We will look into each of them.

Automatic Sidecar injection

Istio uses ValidatingAdmissionWebhooks for validating Istio configuration and MutatingAdmissionWebhooks for automatically injecting the sidecar proxy into user pods.

For automatic sidecar injection to work, the admissionregistration.k8s.io/v1beta1 API should be enabled:

$ kubectl api-versions | grep admissionregistration.k8s.io/v1beta1
admissionregistration.k8s.io/v1beta1

Step two is to verify that the MutatingAdmissionWebhook and ValidatingAdmissionWebhook plugins are listed in the kube-apiserver --enable-admission-plugins flag. That can be done by cluster administrators.

When the injection webhook is enabled, any new pods that are created will automatically have a sidecar added to them.

To enable a namespace for sidecar injection, label it with istio-injection=enabled:

$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection
NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    1h        enabled
istio-system   Active    1h
kube-public    Active    1h
kube-system    Active    1h

Sidecar injection with istioctl on a YAML file

To manually inject the sidecar into a deployment YAML file, use istioctl kube-inject:

$ istioctl kube-inject -f deployment.yaml | kubectl apply -f -

Sidecar injection into an existing deployment

$ kubectl get deployment -o yaml | istioctl kube-inject -f - | kubectl apply -f -

Sidecar injection with istioctl and helm

Sidecar injection into a Helm release can be done in two steps, using helm install and helm template. As a downside, some features such as rollback of the Helm release won’t work; only rolling forward is possible.

First, install the chart with helm install:

$ helm install nginx stable/nginx

Second, update the deployment with the sidecar using helm template:

$ helm template stable/nginx | istioctl kube-inject -f - | kubectl apply -f -

Sidecar injection with kustomize

In kustomization.yaml, reference the deployment file:

resources:
- deployments.yaml

To inject the Istio sidecar into the deployment, a Kustomize patch should be used:

patches:
- path: sidecar.yaml
  target:
    kind: Deployment

Where sidecar.yaml contains the Istio sidecar patch for the Deployment.
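
Put together, the kustomization.yaml for this approach might look roughly like this (file names are the ones used above):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployments.yaml

patches:
- path: sidecar.yaml
  target:
    kind: Deployment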

Conclusions

There are many ways to install the Istio sidecar, or any other sidecar, into a deployment. The main idea is to render the deployment file and pipe it through istioctl for manual injection, or to set up automatic injection with the admission webhook.

How to organize Namespaces in Kubernetes

There are two main objectives:

  1. Users are able to do their job with the highest velocity possible
  2. Users are organized into groups in a multi-tenant setup

Multi tenancy

Kubernetes namespaces help to set up boundaries between groups of users and applications in a cluster.
To make it more pleasant and secure for your users to work in a shared cluster, Kubernetes has a number of policies and controls.

Access policies

RBAC's primary objective is to authorize users and applications to perform specific operations in a namespace or in the whole cluster. Use RBAC to give your users enough permissions in their namespace so they can do day-to-day operations on their own.
NetworkPolicy controls how pods can communicate with each other. Use it to firewall traffic between namespaces, or inside a namespace to critical components like databases.
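
As an illustration, a RoleBinding like the sketch below would give a hypothetical team-a group the built-in edit ClusterRole inside its own namespace; the names are assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
- kind: Group
  name: team-a
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io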

Resource controls

By default a Pod can utilize as many compute resources as are available.
ResourceQuota controls the total amount of compute and storage resources that can be consumed in a namespace.
LimitRange helps to prevent one Pod from utilizing all resources in a namespace by setting minimum and maximum boundaries for compute and storage resources per Pod.
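
For example, a ResourceQuota and a LimitRange for a hypothetical team-a namespace could look like the sketch below; the values are purely illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-limits
  namespace: team-a
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi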

Application Security

PodSecurityPolicy controls security-sensitive aspects of containers, such as privileged containers, use of host namespaces and many others.
Open Policy Agent is a very powerful policy framework which helps to create custom policies for applications and users in a cluster. For example:

  • force users to set a specific label on Kubernetes objects like Service or Deployment
  • deny pulling the :latest image tag
  • allow pulling images only from a specific Docker registry

Namespaces


The following examples could help you decide on namespace boundaries and naming:

  • Namespace per team
  • Namespace per team and project
  • Namespace per application
  • Namespace per git branch name

A namespace should provide enough self-managing autonomy for users and stay in sync with application requirements.
The bigger the namespace, the harder it is to tune its boundaries; at the same time, many small namespaces can create additional operational work for cluster administrators.

Namespace per team and project is an optimal starting point which should work for most organizations.

Let me know about your experience in the comments and have a great day!

120 Days of AWS EKS in Staging

[Photo: Felix Georgii wakeboarding at Wake Crane Project in Pula, Croatia, on September 25, 2016]

My journey with Kubernetes started with Google Kubernetes Engine, continued a year later with self-managed Kubernetes, and then with a migration to Amazon EKS.

EKS as a managed Kubernetes service is not 100% managed: core tools didn’t work as expected, and customer expectations were not aligned with the functions provided. Here I have summarized the experience we gained by running an EKS cluster in staging.

To run EKS you still have to:

  • Prepare network layer: VPC, subnets, firewalls…
  • Install worker nodes
  • Periodically apply security patches on worker nodes
  • Monitor worker node health by installing node-problem-detector and a monitoring stack
  • Set up security groups and NACLs
  • and more

EKS Staging how to?

EKS Setup

  • Use the Terraform EKS module or eksctl to make installation and maintenance easier.

EKS Essentials

  • Install node-problem-detector to monitor for unforeseen kernel or Docker issues
  • Scale up kube-dns to two or more instances (see the command after this list)
  • See more EKS core tips in 90 Days EKS in Production
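
Scaling up DNS is a one-liner; on recent EKS versions the deployment is called coredns, on older clusters kube-dns, so check which one you have:

kubectl -n kube-system scale deployment coredns --replicas=2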

EKS Autoscaling

  • The Kubernetes cluster autoscaler is without doubt a must-have addition to the EKS toolkit. Scale your cluster up, and down to 0 instances if you wish. Base your scaling on the cluster state of Pending/Running pods to get the most out of it.
  • Kubernetes custom metrics, node-exporter and kube-state-metrics are a must-have to enable horizontal pod autoscaling based on built-in metrics like CPU/memory as well as on application-specific metrics like request rate or data throughput (a basic CPU-based example follows this list).
  • Prometheus and cAdvisor are another addition you will need to enable metrics collection.
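
As a starting point, a plain CPU-based HorizontalPodAutoscaler looks like the sketch below; the deployment name my-app is hypothetical:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70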

Ingress controller

  • Istio is one of the most advanced, but breaking changes and its beta status might introduce hard-to-debug issues
  • Contour looks like a good replacement for Istio. It doesn’t have community support as strong as Istio’s, but it is stable enough and has a quite cool IngressRoute CRD which makes Ingress fun to use
  • NGINX Ingress is battle-tested and has the best community support. It has a huge number of features, so it is a good choice for setting up the most stable environment

Stateful applications

  • Ensure you have enough nodes in each AZ where data volumes are. A good start is to create a dedicated node group for each AZ with the minimum number of nodes needed.
  • Ensure the persistent volume claim (PVC) is created in the desired AZ. Create a dedicated storage class for the specific AZ you need the PVC to be in. See allowedTopologies in the following example, and the PVC sketch after it.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-eu-west1-a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - eu-west1-a
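
A PersistentVolumeClaim then simply points at that storage class, which pins the volume (and, with WaitForFirstConsumer, the pod) to eu-west1-a; the claim name and size below are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-eu-west1-a
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-eu-west1-a
  resources:
    requests:
      storage: 20Gi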

Summary

EKS is a good managed Kubernetes service. Some of the mentioned tasks are common for all Kubernetes platforms, but there is a lot of room for the service to grow. The maintenance burden is still quite high, but fortunately the Kubernetes ecosystem has a lot of open-source tools to ease it.

Have fun!