What Are Init Containers In Kubernetes & How To Use Them

Overview

Applications often require some form of initialisation before they can start functioning. A familiar analogy is a constructor in class-based object-oriented programming, which initialises an object of a class: constructors are guaranteed to run first and to run only once.

Similarly, your application can have prerequisites to satisfy before starting up, such as seeding data or setting special filesystem permissions. Application images are built to contain only the dependencies the application itself needs, so they may lack the tools required by the initialisation process. Another common case is waiting for an external dependency, such as another service within the cluster, to start before the application does.

Init Containers To The Rescue

Init containers provide this same initialisation capability at the pod level. They are part of the pod definition: you specify initContainers alongside the regular containers.

Init containers are expected to be small and to run to completion quickly, except when they deliberately wait, delaying the start of the main containers until a dependency is fulfilled.
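The waiting pattern can be sketched with a small busybox-based init container that polls DNS until a backend service becomes resolvable. This is a sketch, not a definitive recipe: the `backend` service name and the pod name are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  initContainers:
  # Blocks pod startup until the hypothetical "backend" service resolves in cluster DNS
  - name: wait-for-backend
    image: busybox:1.36
    command: ['sh', '-c', 'until nslookup backend; do echo waiting for backend; sleep 2; done']
  containers:
  - name: app
    image: nginx:alpine
```

Once the `backend` service exists, the lookup succeeds, the init container exits successfully, and the main container starts.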

Things to note about init containers:

  1. It is possible to have multiple init containers.
  2. Init containers always run to completion, and each init container must finish successfully before the next one starts. Application containers, in contrast, start in parallel and their startup order is arbitrary.
  3. If an init container fails, the kubelet keeps restarting it until it succeeds. However, if the pod's restartPolicy is set to Never and an init container fails, the entire pod is treated as failed.
  4. Since they are part of the same pod, init containers share the pod's volumes, network, and security settings.
  5. Init containers don't support any kind of health probes like livenessProbe, readinessProbe, or startupProbe.
  6. The pod's effective request/limit is the higher of (a) the highest request/limit among the init containers and (b) the sum of all application containers' requests/limits. A side effect is potentially inefficient resource allocation: when an init container's request/limit is the higher value, that capacity sits reserved but unused for the life of the pod, and no other pod can use it.
  7. They enable separation of concerns: the development team can focus on building the application image, while the deployment team handles configuration and initialisation tasks in the init container.
  8. By keeping tools the application doesn't need out of the application image, init containers can limit the attack surface of your application container.
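Point 6 above can be made concrete with a worked example. The pod below is hypothetical (all names and values are invented for illustration); the comments walk through how the effective request is computed.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  initContainers:
  - name: init
    image: busybox:1.36
    command: ['sh', '-c', 'echo init done']
    resources:
      requests:
        cpu: "500m"    # highest init container request: 500m
  containers:
  - name: app-a
    image: nginx:alpine
    resources:
      requests:
        cpu: "150m"
  - name: app-b
    image: nginx:alpine
    resources:
      requests:
        cpu: "150m"
# Effective pod CPU request = max(500m, 150m + 150m) = 500m.
# The scheduler reserves 500m on the node, but once the init container
# finishes, the app containers together request only 300m; the remaining
# 200m stays reserved for this pod and is unavailable to others.
```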

Init container execution sequence in a pod

Scenario: Application Deployment

The development team is building a static web application that will be hosted on Nginx in a Kubernetes cluster. We need to minimize the effort it takes to build the image and deploy it as a pod.

Solution:

  1. We will share the data between the init container and the application container using an emptyDir volume.
  2. In the init container, clone the latest release of the web application to the mounted volume.
  3. In the application container, mount the volume with the latest release.

This is how the pod definition file will look:

    apiVersion: v1
    kind: Pod
    metadata:
      name: application
      labels:
        name: application
    spec:
      initContainers:
      - name: release
        image: alpine/git:latest
        command:
        - git
        - clone
        - https://github.com/guptaparv/static-website
        - /var/lib/data
        volumeMounts:
        - name: source
          mountPath: /var/lib/data
      containers:
      - name: application
        image: nginx:alpine
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: source
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: source
        emptyDir: {}
  • Create a pod.yaml file with the above definition.

  • Schedule the pod and check the status of the pod

    kubectl apply -f pod.yaml
    kubectl get pods
    
  • Expose the pod as a LoadBalancer service, or any other service type that allows you to access it from a browser

    kubectl expose pod application --type=LoadBalancer
    kubectl get svc
    
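Equivalently, you can define the service declaratively rather than with kubectl expose. The manifest below is a sketch of what the imperative command generates; the file name is an assumption.

```yaml
# service.yaml - declarative equivalent of
# "kubectl expose pod application --type=LoadBalancer"
apiVersion: v1
kind: Service
metadata:
  name: application
spec:
  type: LoadBalancer
  selector:
    name: application   # matches the label in the pod definition above
  ports:
  - port: 80
    targetPort: 80
```

Apply it with `kubectl apply -f service.yaml`; the resulting service is equivalent to the one created by the expose command.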
  • Open your browser and enter the IP address of the service. You should see the application deployed on the Nginx web server, rather than the default Nginx page.

Home page of the static web application deployed on Nginx

Conclusion

The development team was able to deploy the application without even worrying about containerising it, which lets them focus solely on building new features. The deployment team used Nginx to host the application, while an init container fetches its latest release.