KiND: Kubernetes in Docker
December 25, 2020 - 12 min read
Kubernetes, but in Docker
In my last post, I briefly explained what the deprecation of Dockershim actually was, why it made sense, and how it could affect you (spoiler: not that much). I also ended that post with the promise of a little tutorial on how to use KiND as a local Kubernetes cluster for developing and/or testing your applications. This is that tutorial.
However, before we begin, you might be wondering why running Kubernetes inside a Docker container is a good idea. Wasn't Docker deprecated on Kubernetes, after all?
If you paid any attention, the deprecation refers only to the use of Docker Engine as a container runtime inside a Kubernetes cluster. Docker images, Dockerfiles, and all the tooling you're used to still work, and will keep working, since they've been following the OCI specification the whole time. If this is still unclear, you can read my post about the situation and the official Kubernetes blog for more details.
OK, with that out of the way, why should we bother with KiND? There are a few reasons:
- It is future-proof. KiND already uses containerd as its runtime, so we don't have to worry about the Docker Engine deprecation. And yes, it is kind of funny to have a Kubernetes cluster that doesn't use Docker as a runtime... running inside a Docker container.
- It runs as a normal Docker container. This means you can drop and recreate your Kubernetes cluster as easily as you can with your other Docker containers.
- It was designed for testing Kubernetes itself. This means that when Kubernetes evolves, KiND quickly catches up.
There is, however, a big disclaimer: KiND is still technically in beta (you can check the roadmap here). This means that not everything is perfect and a lot of things are either not fully implemented or subject to change. That said, I've been using it as a local development environment for the past 5 months and have had no problems at all. If you're unsure about minikube, or if you're having trouble running it, give KiND a chance.
Installing and creating a cluster
The first thing we need to do is, as expected, install KiND. If you have Go available, you can download and compile the latest version easily with go get: go get sigs.k8s.io/kind@v0.9.0. Make sure $GOPATH/bin is added to your $PATH variable and you're all set. Easy!
There is another installation option available for arkade users: just type arkade get kind and you should have a working installation in no time. Of course, as a last resort, you can always download the compiled binary and install it manually if you wish.
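If you go down the manual route, it boils down to grabbing the binary from the releases page and dropping it somewhere in your $PATH. Here is a rough sketch, assuming a Linux amd64 machine and version v0.9.0 (adjust the URL to your platform and to the version you want):
# Download the v0.9.0 binary (Linux amd64; adjust for your platform)
$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
# Make it executable and move it into your $PATH
$ chmod +x ./kind
$ sudo mv ./kind /usr/local/bin/kind
# Confirm it is working
$ kind version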
Regardless of the option used, creating a cluster is very straightforward:
# Be wary, this will take a while the first time
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
# Let's list our clusters.
# There is only one, but we could create more
# by using kind create cluster --name foo
$ kind get clusters
kind
# To prove it is working, we can get the
# cluster information. If we had more than one
# cluster, we would need to use the --context flag
$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:44281
KubeDNS is running at https://127.0.0.1:44281/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# We can also see our cluster by using docker container ls
$ docker container ls
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                       NAMES
8b6c1bd36039   kindest/node:v1.19.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   127.0.0.1:39593->6443/tcp   kind-control-plane
Easy, right? With just a few commands we have a new Kubernetes cluster up and running inside a Docker container, and thus completely isolated from our host machine.
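By the way, if you are curious about the "containerd instead of Docker" part, you can peek inside the node, which is just a regular container. Treat the snippet below as a curiosity check; it assumes the crictl tool is bundled in the kindest/node image (it was in the versions I used):
# The "node" is just a container; crictl inside it talks to containerd
# and lists the containers it manages (expect the kube-system components)
$ docker exec -it kind-control-plane crictl ps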
Deploying an application
Everything looks OK, but how about we deploy a sample application to make sure our cluster is really working? To test it, we will deploy an Apache instance with the standard configuration, so first let's create a file, deployment.yaml, with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd:2.4
As you can see, this is as simple as it gets. The next step is to apply the configuration:
# Apply the configuration
$ kubectl apply -f deployment.yaml
deployment.apps/apache-depl created
# (Optional) wait for everything to be deployed
$ kubectl rollout status deploy/apache-depl
Waiting for deployment "apache-depl" rollout to finish: 0 of 1 updated replicas are available...
deployment "apache-depl" successfully rolled out
# Let's check if everything went smoothly
$ kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/apache-depl-6cf5b9f8d4-qgd8g   1/1     Running   0          36s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   28m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apache-depl   1/1     1            1           36s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/apache-depl-6cf5b9f8d4   1         1         1       36s
As expected, a new deployment and a new pod were created successfully. However, to be able to "see" it working, we also need to expose our set of pods by using a service, so go ahead and create the service.yaml file, with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: apache-srv
spec:
  selector:
    app: apache
  type: ClusterIP
  ports:
    - name: apache
      protocol: TCP
      port: 8080
      targetPort: 80
Note how we "expose" port 8080 to the rest of the cluster, but internally the httpd
container runs port 80. We don't need to do this "remapping of ports", I did it just as an example (and because I'm not a big fan of using port 80 locally, in general). Let's apply the changes and check if we can access our Apache server:
# Apply the configuration
$ kubectl apply -f service.yaml
service/apache-srv created
# Apparently, it is working
$ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
apache-srv   ClusterIP   10.98.201.81   <none>        8080/TCP   10s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    81m
# Forward 8080 (of the service) to 8000 of the local machine
# Press C-c to exit
$ kubectl port-forward service/apache-srv 8000:8080
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Note how we remapped port 8080 of the service to port 8000 on the local machine. Again, this isn't needed (we could've just used 8080:8080), but I decided to do it just to show you another example. Anyway, if you open http://localhost:8000 in your browser, you should see the classic It works! Apache starting page. When you're done, hit C-c to stop the port forwarding.
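If you prefer the terminal over the browser, you can run the same check with curl from another shell before stopping the port forwarding. The body below is what the default httpd image usually serves, so don't worry if yours differs slightly:
# In another terminal, while kubectl port-forward is still running
$ curl http://localhost:8000
<html><body><h1>It works!</h1></body></html>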
Ingress
While port forwarding is good enough for a quick test, in more realistic scenarios we should use an ingress controller. On minikube we can easily enable the NGINX Ingress controller with minikube addons enable ingress, but on KiND the process is a bit more involved: first, we need to create a cluster configured with extraPortMappings to allow localhost access to the Ingress controller, and then deploy ingress-nginx itself. To do that, first create the config.yaml file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
Now, we will delete our current cluster and create a new one, with the correct configuration:
# Drop our current cluster
$ kind delete cluster
Deleting cluster "kind" ...
# Create a new one, with the correct configuration
$ kind create cluster --config config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
# Deploy Ingress-NGINX
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
# It may take a few seconds for the deploy to finish
$ kubectl rollout status -n ingress-nginx deploy/ingress-nginx-controller
Waiting for deployment "ingress-nginx-controller" rollout to finish: 0 of 1 updated replicas are available...
deployment "ingress-nginx-controller" successfully rolled out
Great, now we need to create the ingress configuration to make our Apache server available at /apache. This is done by creating the ingress.yaml file, with the following contents:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /apache
            pathType: Prefix
            backend:
              service:
                name: apache-srv
                port:
                  number: 8080
Note that we specify that /apache will point to apache-srv:8080. This would normally mean that the request would land on /apache inside our Apache server, but thanks to the rewrite-target annotation, it lands on the root of the server instead. Of course, if your applications/services each have a unique prefix, you can drop this annotation altogether.
With that configuration created, we now need to deploy our application (Apache), the service, and the ingress configuration:
$ kubectl apply -f deployment.yaml
deployment.apps/apache-depl created
$ kubectl apply -f service.yaml
service/apache-srv created
$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/ingress-srv created
Now, try to access http://localhost (yes, on port 80). You should see the standard NGINX 404 page (of course, we don't have anything to serve at the root). However, when you try http://localhost/apache (again, port 80), you will see the good old It works! page from Apache. This happens because our Ingress is working correctly, routing requests from a given path (/apache) to a given service (apache-srv:8080).
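If you want to double-check this from the command line, a couple of curl calls tell the same story: the root answers with the NGINX 404, and /apache answers with the Apache page:
# The root path is handled by NGINX, and there is nothing there
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
404
# The /apache path is routed to our apache-srv service
$ curl http://localhost/apache
<html><body><h1>It works!</h1></body></html>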
Loading images in the cluster
The last thing we need to try is using local images (i.e. not published to Docker Hub) in our cluster. To do that, let's clone the Node.js application used in my previous article on minikube, build a local image, and deploy it:
# Clone the repository
$ git clone https://github.com/ibraimgm/k8s-demo.git
Cloning into 'k8s-demo'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 11 (delta 0), reused 11 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), 5.76 KiB | 5.76 MiB/s, done.
# Cd to it
$ cd k8s-demo
# Build a local image (full output omitted)
$ docker build -t user/demo .
Sending build context to Docker daemon 81.92kB
(...)
Successfully built 2d80f09d290a
Successfully tagged user/demo:latest
# Deploy
$ kubectl apply -f demo-depl.yaml
deployment.apps/demo-depl created
# Wait, what?
$ kubectl get pods
NAME                           READY   STATUS         RESTARTS   AGE
apache-depl-6cf5b9f8d4-7j8cz   1/1     Running        0          44m
demo-depl-54988b45cc-dqhtf     0/1     ErrImagePull   0          6s
Wait a second... if we have the image available locally, why are we receiving ErrImagePull? As explained in the minikube article, just because we have an image locally in Docker doesn't mean that our cluster can "see" it. With minikube, we need to go to a "special" environment and rebuild the image there, but with KiND, we only need to load the image into the cluster:
# first, clean up the previous messy deployment
$ kubectl delete -f demo-depl.yaml
deployment.apps "demo-depl" deleted
# Load the existing image into the cluster
$ kind load docker-image user/demo:latest
Image: "user/demo:latest" with ID "sha256:2d80f09d290af51122f7189a846d740281f4c4f9249b49e65084d821320562d4" not yet present on node "kind-control-plane", loading...
# Deploy again
$ kubectl apply -f demo-depl.yaml
deployment.apps/demo-depl created
# Now our pod is running
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
apache-depl-6cf5b9f8d4-7j8cz   1/1     Running   0          52m
demo-depl-54988b45cc-2dtzb     1/1     Running   0          8s
And just like that, we have a running pod! How about we put this one behind the ingress too? The cloned repository already has a demo-svc.yaml file, but it exposes the service as a node port; instead of using it, we will edit demo-depl.yaml and add both a ClusterIP service and the appropriate NGINX ingress configuration (you can separate multiple definitions in Kubernetes by using ---). In the end, demo-depl.yaml should have the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: user/demo
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: demo-srv
spec:
  selector:
    app: demo
  type: ClusterIP
  ports:
    - name: demo
      protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress-srv
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /demo/(.*)
            pathType: Prefix
            backend:
              service:
                name: demo-srv
                port:
                  number: 8080
Note how we used a regular expression in the path matching (/demo/(.*)) and rewrote it to /$1. This means that if we access /demo/foo, the request will be redirected to /foo on the demo service. Now, apply the changes with kubectl apply -f demo-depl.yaml and open http://localhost/demo/time, which should return the current timestamp. Of course, our earlier /apache URL is still working, unaffected by these changes (thanks to the different value used in the metadata.name field).
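As before, you can verify both routes quickly with curl: /demo/time should answer with the current timestamp from the demo service, while /apache keeps working exactly as it did:
# The regex rule rewrites /demo/time to /time on the demo service
$ curl http://localhost/demo/time
# The old route is still there, untouched
$ curl http://localhost/apache
<html><body><h1>It works!</h1></body></html>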
Conclusion
For me, KiND works really well as a development environment and is quite easy to set up, since the only real requirement is Docker, which you probably have running anyway. Creating and dropping clusters is a breeze (so, if you screw something up, just start from scratch), and loading local images into the cluster is easier and less error-prone than with minikube. The ingress setup is a bit more involved than I would expect, but it is still pretty easy to use and allows different controllers, like Ambassador and Contour (check the documentation for examples).
In the end, I think KiND is a very good replacement for minikube. It is simpler to use, has very good performance, and cluster creation is painless. If you have problems with the minikube installation or are on the fence about which solution to use for a local cluster, give KiND a chance. I'm pretty sure you will fall in love with it too.